Tag Archives: accessibility

Building A Simple AI Chatbot With Web Speech API And Node.js

Using voice commands has become pretty ubiquitous nowadays, as more mobile phone users use voice assistants such as Siri and Cortana, and as devices such as Amazon Echo and Google Home have been invading our living rooms.

These systems are built with speech recognition software that allows their users to issue voice commands. Now, our web browsers are becoming familiar with the Web Speech API, which allows users to integrate voice data into web apps.

Accessibility: Improving The UX For Color-Blind Users

According to Colour Blind Awareness, 4.5% of the population is color-blind. If your audience is mostly male, this increases to 8%. Designing for color-blind people can easily be forgotten because most designers aren’t color-blind. In this article, I provide 13 tips to improve the experience for color-blind people, something which can often benefit people with normal vision too.
What is color blindness? There are many types of color blindness, but it comes down to not seeing color clearly, getting colors mixed up, or not being able to differentiate between certain colors.

Designing A Dementia-Friendly Website


Some well-established web design basics: minimize the number of choices that someone has to make; create self-explanatory navigation tools; help people get to what they’re looking for as quickly as possible. Sounds simple enough? Now consider this…

An ever-growing number of web users around the world are living with dementia. They have widely varying levels of computer literacy and may be experiencing some of the following issues: memory loss, confusion, issues with vision and perception, difficulties sequencing and processing information, reduced problem-solving abilities, or problems with language.

Making Accessibility Simpler, With Ally.js

I’ve been a web developer for 15 years, but I’d never looked into accessibility. I didn’t know enough people with (serious) disabilities to properly understand the need for accessible applications and no customer has ever required me to know what ARIA is. But I got involved with accessibility anyway – and that’s the story I’d like to share with you today.
At the Fronteers Conference in October 2014 I saw Heydon Pickering give a talk called “Getting nowhere with CSS best practices”.

Notes On Client-Rendered Accessibility

As creators of the web, we bring innovative, well-designed interfaces to life. We find satisfaction in improving our craft with each design or line of code. But this push to elevate our skills can be self-serving: Does a new CSS framework or JavaScript abstraction pattern serve our users or us as developers?

If a framework encourages best practices in development while also improving our workflow, it might serve both our users’ needs and ours as developers. If it encourages best practices in accessibility alongside other areas, like performance, then it has potential to improve the state of the web.

Despite our pursuit to do a better job every day, sometimes we forget about accessibility, the practice of designing and developing in a way that’s inclusive of people with disabilities. We have the power to improve lives through technology — we should use our passion for the craft to build a more accessible web.

These days, we build a lot of client-rendered web applications, also known as single-page apps, JavaScript MVCs and MV-whatever. AngularJS, React, Ember, Backbone.js, Spine: You may have used or seen one of these JavaScript frameworks in a recent project. Common user experience-related characteristics include asynchronous postbacks, animated page transitions, and dynamic UI filtering. With frameworks like these, creating a poor user experience for people with disabilities is, sadly, pretty easy. Fortunately, we can employ best practices to make things better.

In this article, we will explore techniques for building accessible client-rendered web applications, making our jobs as web creators even more worthwhile.

MV-whatever. (Show animated Gif2)

Semantics

Front-end JavaScript frameworks make it easy for us to create and consume custom HTML tags like <pizza-button>, which you’ll see in an example later on. React, AngularJS and Ember enable us to attach behavior to made-up tags with no default semantics, using JavaScript and CSS. We can even use Web Components3 now, a set of new standards holding both the promise of extensibility and a challenge to us as developers. With this much flexibility, it’s critical for users of assistive technologies such as screen readers that we use semantics to communicate what’s happening without relying on a visual experience.

Consider a common form control4: A checkbox opting you out of marketing email is pretty significant to the user experience. If it isn’t announced as “Subscribe checked check box” in a screen reader, you might have no idea you’d need to uncheck it to opt out of the subscription. In client-side web apps, it’s possible to construct a form model from user input and post JSON to a server regardless of how we mark it up — possibly even without a <form> tag. With this freedom, knowing how to create accessible forms is important.

To keep our friends with screen readers from opting in to unwanted email, we should:

  • use native inputs to easily announce their role (purpose) and state (checked or unchecked);
  • provide an accessible name using a <label> with paired for and id attributes, an aria-label on the input, or aria-labelledby pointing to another element’s id.
<form>
  <label for="subscribe">
    Subscribe
  </label>
  <input type="checkbox" id="subscribe" checked>
</form>

Native Checkbox With Label

If native inputs can’t be used (with good reason), create custom checkboxes with role=checkbox, aria-checked, aria-disabled and aria-required, and wire up keyboard events. See the W3C’s “Using WAI-ARIA in HTML”5.

Custom Checkbox With ARIA

<form>
  <some-checkbox role="checkbox" tabindex="0" aria-labelledby="subscribe" aria-checked="true">
  </some-checkbox>
  <some-label id="subscribe">Subscribe</some-label>
</form>
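
By way of illustration, a minimal sketch of the state and keyboard wiring such an element needs (assuming the <some-checkbox> markup above) might look like this; a native input gives you all of this for free:

// A sketch of the extra work a custom checkbox requires.
var checkbox = document.querySelector('some-checkbox');

function toggle() {
  var checked = checkbox.getAttribute('aria-checked') === 'true';
  checkbox.setAttribute('aria-checked', String(!checked));
}

// Native checkboxes respond to clicks and the Space key for free;
// here, both have to be wired up by hand.
checkbox.addEventListener('click', toggle);
checkbox.addEventListener('keydown', function (event) {
  if (event.key === ' ' || event.key === 'Spacebar') {
    event.preventDefault(); // stop Space from scrolling the page
    toggle();
  }
});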

Form inputs are just one example of the use of semantic HTML6 and ARIA attributes to communicate the purpose of something — other important considerations include headings and page structure, buttons, anchors, lists and more. ARIA7, or Accessible Rich Internet Applications, exists to fill in gaps where accessibility support for HTML falls short (in theory, it can also be used for XML or SVG). As you can see from the checkbox example, ARIA requirements quickly pile up when you start writing custom elements. Native inputs, buttons and other semantic elements provide keyboard and accessibility support for free. The moment you create a custom element and bolt ARIA attributes onto it, you become responsible for managing the role and state of that element.

Although ARIA is great and capable of many things, understanding and using it is a lot of work. It also doesn’t have the broadest support. Take Dragon NaturallySpeaking8 — this assistive technology, which people use all the time to make their life easier, is just starting to gain ARIA support. Were I a browser implementer, I’d focus on native element support first, too — so it makes sense that ARIA might be added later. For this reason, use native elements, and you won’t often need to use ARIA roles or states (aria-checked, aria-disabled, aria-required, etc.). If you must create custom controls, read up on ARIA to learn the expected keyboard behavior9 and how to use attributes correctly.

Tip: Use Chrome’s Accessibility Developer Tools10 to audit your code for errors, and you’ll get the bonus “Accessibility Properties” inspector.

AngularJS material in Chrome with the accessibility inspector open. (View large version12)

Web Components and Accessibility

An important topic in a discussion on accessibility and semantics is Web Components, a set of new standards landing in browsers that enable us to natively create reusable HTML widgets. Because Web Components are still so new, the syntax is very much in flux. In December 2014, Mozilla said it wouldn’t support HTML imports13, a seemingly obvious way to distribute new components; so, for now, that technology is natively available in Chrome and Opera14 only. Additionally, up for debate is the syntax for extending native elements (see the discussion about is="" syntax15), along with how rigid the shadow DOM boundary should be. Despite these changes, here are some tips for writing semantic Web Components:

  • Small components are more reusable and easier to manage for any necessary semantics.
  • Use native elements within Web Components to gain behavior for free.
  • Element IDs within the shadow DOM do not have the same scope as the host document.
  • The same non-Web Component accessibility guidelines apply.
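
To illustrate the second tip, a minimal sketch of a component that wraps a native <button> might look like the following (using the newer customElements registration syntax, since the 2015 syntax was still in flux; the <pizza-button> tag is hypothetical):

// A sketch: a custom element that uses a native <button> internally,
// inheriting its role, focusability and keyboard behavior for free.
class PizzaButton extends HTMLElement {
  connectedCallback() {
    var button = document.createElement('button');
    button.type = 'button';
    button.textContent = this.getAttribute('label') || 'Order pizza';
    this.appendChild(button);
  }
}
customElements.define('pizza-button', PizzaButton);

Used as <pizza-button label="Order pizza"></pizza-button>, the inner button is what keyboards and screen readers interact with, so no ARIA needs to be bolted on.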

For more information on Web Components and accessibility, have a look at these articles:

  • “Polymer and Web Component Accessibility”16
  • “Web Components Punch List”17
  • “Accessible Web Components”18

Interactivity

Native elements such as buttons and inputs come prepackaged with events and properties that work easily with keyboards and assistive technologies. Leveraging these features means less work for us. However, given how easy JavaScript frameworks and CSS make it to create custom elements, such as <pizza-button>, we might have to do more work to deliver pizza from the keyboard if we choose to mark it up as a new element. For keyboard support, custom HTML tags need:

  • tabindex, preferably 0 so that you don’t have to manage the entire page’s tab order (WebAIM discusses this19);
  • a keyboard event such as keypress or keydown to trigger callback functions.
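
Putting those two requirements together, a minimal sketch (the <pizza-button> element and orderPizza callback are hypothetical) could look like this:

// A sketch of basic keyboard support for a custom element.
var pizzaButton = document.querySelector('pizza-button');

// tabindex="0" places the element in the page's natural tab order.
pizzaButton.setAttribute('tabindex', '0');

function orderPizza() {
  console.log('Pizza ordered!');
}

// Mirror native button behavior: activate on click, Enter and Space.
pizzaButton.addEventListener('click', orderPizza);
pizzaButton.addEventListener('keydown', function (event) {
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault();
    orderPizza();
  }
});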

Focus Management

Closely related to interactivity but serving a slightly different purpose is focus management. The term “client-rendered” refers partly to a single-page browsing experience where routing is handled with JavaScript and there is no server-side page refresh. Portions of views could update the URL and replace part or all of the DOM, including where the user’s keyboard is currently focused. When this happens, focus is easily lost, creating a pretty unusable experience for people who rely on a keyboard or screen reader.

Imagine sorting a list with your keyboard’s arrow keys. If the sorting action rebuilds the DOM, then the element that you’re using will be rerendered, losing focus in the process. Unless focus is deliberately sent back to the element that was in use, you’d lose your place and have to tab all the way down to the list from the top of the page again. You might just leave the website at that point. Was it an app you needed to use for work or to find an apartment? That could be a problem.

In client-rendered frameworks, we are responsible for ensuring that focus is not lost when rerendering the DOM. The easy way to test this is to use your keyboard. If you’re focused on an item and it gets rerendered, do you bang your keyboard against the desk and start over at the top of the page or gracefully continue on your way? Here is one focus-management technique from Distiller20 using Spine, where focus is sent back into relevant content after rendering:

class App.FocusManager
  constructor: ->
    # Remember the most recently focused element, via event delegation
    $('body').on 'focusin', (e) =>
      @oldFocus = e.target

    # After each client-side render, try to restore focus
    App.bind 'rendered', (e) =>
      return unless @oldFocus
      if @oldFocus.getAttribute('data-focus-id')
        @_focusById()
      else
        @_focusByNodeEquality()

  _focusById: ->
    focusId = @oldFocus.getAttribute('data-focus-id')
    newFocus = document.querySelector("##{focusId}")
    App.focus(newFocus) if newFocus

  _focusByNodeEquality: ->
    allNodes = $('body *:visible').get()
    for node in allNodes
      if App.equalNodes(node, @oldFocus)
        App.focus(node)

In this helper class, JavaScript (implemented in CoffeeScript) binds a focusin listener to document.body that, using event delegation21, fires anytime an element is focused, and it stores a reference to that focused element. The helper class also subscribes to a Spine rendered event, tapping into client-side rendering so that it can gracefully handle focus. If an element was focused before the rendering happened, it can focus an element in one of two ways. If the old node is identical to a new one somewhere in the DOM, then focus is automatically sent to it. If the node isn’t identical but has a data-focus-id attribute on it, then it looks up that id’s value and sends focus to it instead. This second method is useful for when elements aren’t identical anymore because their text has changed (for example, “item 1 of 5” becoming “item 2 of 5”).

Each JavaScript MV-whatever framework will require a slightly different approach to focus management. Unfortunately, most of them won’t handle focus for you, because it’s hard for a framework to know what should be focused upon rerendering. By testing rendering transitions with your keyboard and making sure focus is not dropped, you’ll be empowered to add support to your application. If this sounds daunting, inquire in your framework’s support community about how focus management is typically handled (see React’s GitHub repo22 for an example). There are people who can help!

Cat “helping”. (View animated Gif24)

Notifying The User

There is a debate about whether client-side frameworks are actually good for users25, and plenty of people have an opinion26 on them. Clearly, most client-rendered app frameworks could improve the user experience by providing easy asynchronous UI filtering, form validation and live content updates. To make these dynamic updates more inclusive, developers should also update users of assistive technologies when something is happening away from their keyboard focus.

Imagine a scenario: You’re typing in an autocomplete widget and a list pops up, filtering options as you type. Pressing the down arrow key cycles through the available options, one by one. One technique to announce these selections would be to append messages to an ARIA live region27, a mechanism that screen readers can use to subscribe to changes in the DOM. As long as the live region exists when the element is rendered, any text appended to it with JavaScript will be announced (meaning you can’t bind aria-live and add the first message at the same time). This is essentially how Angular Material28’s autocomplete handles dynamic screen-reader updates:

<md-autocomplete md-selected-item="ctrl.selectedItem" aria-disabled="false">
  <md-autocomplete-wrap role="listbox">
    <input type="text" aria-label="{{ariaLabel}}" aria-owns="ul_001">
  </md-autocomplete-wrap>
  <ul role="presentation" id="ul_001">
    <li ng-repeat="(index, item) in $mdAutocompleteCtrl.matches" role="option" tabIndex="0"></li>
  </ul>
  <aria-status class="visually-hidden" role="alert">
    <p ng-repeat="message in messages">{{message}}</p>
  </aria-status>
</md-autocomplete>

In the simplified code above (the full directive29 and related controller30 source are on GitHub), when a user types in the md-autocomplete text input, list items for results are added to a neighboring unordered list. Another neighboring element, aria-status, gets its aria-live functionality from the alert role. When results appear, a message is appended to aria-status announcing the number of items, “There is one match” or “There are four matches,” depending on the number of options. When a user arrows through the list, that item’s text is also appended to aria-status, announcing the currently highlighted item without the user having to move focus from the input. By curating the list of messages sent to an ARIA live region, we can implement an inclusive design that goes far beyond the visual. Similar regions can be used to validate forms.
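
Stripped of the Angular specifics, the same pattern reduces to a small helper. The following is only a sketch (the #aria-status id is assumed); the important detail is that the region must already exist in the DOM so that screen readers are subscribed to it before the first message is appended:

// A sketch of appending curated messages to an existing ARIA live region,
// e.g. <div id="aria-status" class="visually-hidden" role="alert"></div>.
function announce(message) {
  var region = document.getElementById('aria-status');
  var p = document.createElement('p');
  p.textContent = message;
  region.appendChild(p);

  // Prune old messages so the region doesn't grow without bound.
  while (region.children.length > 5) {
    region.removeChild(region.firstChild);
  }
}

// announce('There are four matches');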

For more information on accessible client-side validation, read Marco Zehe’s “Easy ARIA Tip #3: aria-invalid and Role alert31” or Deque’s post on accessible forms32.

Conclusion

So far, we’ve talked about accessibility with screen readers and keyboards. Also consider readability: This includes color contrast, readable fonts and obvious interactions. In client-rendered applications, all of the typical web accessibility principles33 apply, in addition to the specific ones outlined above. The resources listed below will help you incorporate accessibility in your current or next project.

It is up to us as developers and designers to ensure that everyone can use our web applications. By knowing what makes an accessible user experience, we can serve a lot more people, and possibly even make their lives better. We need to remember that client-rendered frameworks aren’t always the right tool for the job. There are plenty of legitimate use cases for them, hence their popularity. There are definitely drawbacks to rendering everything on the client34. However, even as solutions for seamless server- and client-side rendering improve over time, these same accessibility principles of focus management, semantics and alerting the user will remain true, and they will enable more people to use your apps. Isn’t it cool that we can use our craft to help people through technology?

Resources

  • “Color Contrast Tips and Tools for Accessibility”35
  • WebAIM’s “Web Accessibility for Designers”36
  • Chrome’s Accessibility Developer Tools37
  • “Using WAI-ARIA in HTML”38
  • “How I Audit a Website for Accessibility”39
  • “Using ngAria”40
  • Angular Protractor Accessibility Plugin41

Thanks to Heydon Pickering for reviewing this article.

(hp, al, ml)

Footnotes

  1. https://www.smashingmagazine.com/wp-content/uploads/2015/05/whatever.gif
  2. https://www.smashingmagazine.com/wp-content/uploads/2015/05/whatever.gif
  3. http://www.smashingmagazine.com/2014/03/04/introduction-to-custom-elements/
  4. http://webaim.org/techniques/forms/controls
  5. http://www.w3.org/TR/aria-in-html/
  6. http://webaim.org/techniques/semanticstructure/
  7. http://www.w3.org/TR/wai-aria/
  8. http://assistivetechnology.about.com/od/SpeechRecognition/p/Dragon-Naturallyspeaking-As-Assistive-Technology.htm
  9. http://www.w3.org/WAI/PF/aria-practices/#keyboard
  10. https://chrome.google.com/webstore/detail/accessibility-developer-t/fpkknkljclfencbdbgkenhalefipecmb
  11. https://www.smashingmagazine.com/wp-content/uploads/2015/04/02-RaWOSKs-opt.png
  12. https://www.smashingmagazine.com/wp-content/uploads/2015/04/02-RaWOSKs-opt.png
  13. https://hacks.mozilla.org/2014/12/mozilla-and-web-components/
  14. http://caniuse.com/#feat=imports
  15. https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0361.html
  16. http://unobfuscated.blogspot.com/2015/03/polymer-and-web-component-accessibility.html
  17. http://www.paciellogroup.com/blog/2014/09/web-components-punch-list/
  18. https://www.polymer-project.org/0.5/articles/accessible-web-components.html
  19. http://webaim.org/techniques/keyboard/tabindex
  20. http://drinkdistiller.com
  21. http://learn.jquery.com/events/event-delegation/
  22. https://github.com/facebook/react/issues/1791#issuecomment-82987932
  23. https://www.smashingmagazine.com/wp-content/uploads/2015/05/cat-helping.gif
  24. https://www.smashingmagazine.com/wp-content/uploads/2015/05/cat-helping.gif
  25. http://tantek.com/2015/069/t1/js-dr-javascript-required-dead
  26. https://adactio.com/journal/8245
  27. https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/ARIA_Live_Regions
  28. https://material.angularjs.org/
  29. https://github.com/angular/material/blob/master/src/components/autocomplete/js/autocompleteDirective.js#L43
  30. https://github.com/angular/material/blob/master/src/components/autocomplete/js/autocompleteController.js
  31. https://www.marcozehe.de/2008/07/16/easy-aria-tip-3-aria-invalid-and-role-alert/
  32. http://www.deque.com/blog/accessible-client-side-form-validation-html5-wai-aria/
  33. http://webaim.org/intro/
  34. http://alistapart.com/article/let-links-be-links
  35. http://www.smashingmagazine.com/2014/10/22/color-contrast-tips-and-tools-for-accessibility/
  36. http://webaim.org/resources/designers/
  37. https://chrome.google.com/webstore/detail/accessibility-developer-t/fpkknkljclfencbdbgkenhalefipecmb
  38. http://www.w3.org/TR/aria-in-html/
  39. http://substantial.com/blog/2014/07/22/how-i-audit-a-website-for-accessibility/
  40. http://angularjs.blogspot.com/2014/11/using-ngaria.html
  41. http://marcysutton.com/angular-protractor-accessibility-plugin/

“It’s Alive!”: Apps That Feed Back Accessibly

It’s one thing to create a web application and quite another to create an accessible web application. That’s why Heydon Pickering1, both author and editor at Smashing Magazine, wrote an eBook Apps For All: Coding Accessible Web Applications2, outlining the roadmap for the accessible applications we should all be making.

The following is an extract from the chapter “It’s Alive” from Heydon’s book, which explores how to use ARIA live regions. JavaScript applications are driven by events, and the user should be informed of what important events are happening in the interface. Live regions help us provide accessible messaging systems, keeping users informed of events in a way that is compatible with assistive technologies.

Getting The Message

Picture the scene: it’s a day like any other and you’re at your desk, enclosed in a semicircular bank of monitors that make up your extended desktop, intently cranking out enterprise-level CSS for MegaDigiSpaceHub Ltd. You are one of many talented front-end developers who share this floor in your plush London office.

You don’t know it, but a fire has broken out on the floor below you due to a “mobile strategist” spontaneously combusting. Since no expense was spared on furnishing the office with adorable postmodern ornaments, no budget remained for installing a fire alarm system. It is up to the floor manager in question to travel throughout the office, warning individual departments in person.

He does this by walking silently into each room, holding a business card aloft with the word “fire” written on it in 12pt Arial for a total of three seconds, then leaving. You and the other developers — ensconced behind your monitors — have no idea he even visited the room.

Three monitors for coding

What I cover in my eBook is, for the most part, about making the use of your websites and applications accessible. That is, we’re concerned with everyone being able to do things with them easily. However, it is important to acknowledge that when something is done (or simply happens), something else will probably happen as a result: there are actions and reactions.

“When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction to that of the first body.”

– Newton’s third law of motion (Newton’s laws of motion, Wikipedia3)

Providing feedback to users, to confirm the course they’ve taken, address the result of a calculation they’ve made or to insert helpful commentary of all sorts, is an important part of application design. The problem which needs to be addressed is that interrupting a user visually, by making a message appear on screen, is a silent occurrence. It is also one which — in the case of dialogs — often involves the activation of an element that originates from a completely remote part of the document, many DOM nodes away from the user’s location of focus.

To address these issues and to ensure users (unlike the poor developers in the introductory story) get the message, ARIA provides live regions4. As their name suggests, live regions are elements whose contents may change in the course of the application’s use. They are living things, so don’t always stand still. By adorning them with the appropriate ARIA attributes, these regions will interrupt the user to announce their changes as they happen.

In the following example, we will look at how to alert users to changes which they didn’t ask for, but — like the building being on fire — really ought to know about anyway.

Alert!

Perhaps the only thing worse than a fire that could happen to the office of a web development company would be losing connectivity to the web. Certainly, if I were working in an online application, I’d like to know if the application will no longer behave in the way I expect or perhaps won’t store my data properly. This is why Google Mail inserts a warning whenever you go offline. As noted in Marco Zehe’s 2008 blog post5, Google was an early adopter of ARIA live regions.

Yellow box reads “Unable to reach Gmail. Please check your internet connection.”

We are going to create a script which tests whether the user is online or off and uses ARIA to warn screen reader users of the change in this status so they know whether it’s worth staying at their desk or giving up and going for a beer.

The Setup

For live regions, ARIA provides a number of values for both the role and aria-live attributes. This can be confusing because there is some crossover between the two and some screen readers only support either the role or aria-live alternatives. It’s OK, there are ways around this.

At the most basic level, there are two common types of message:

  1. “This is pretty important but I’m going to wait and tell you when you’re done doing whatever it is you’re doing.”
  2. “Drop everything! You need to know this now or we’re all in big trouble. AAAAAAAAAAGHH!”

Mapped to the respective role and aria-live attributes, these common types are written as follows:

  1. “This is pretty important but I’m going to wait and tell you when you’re done doing whatever it is you’re doing.” (aria-live="polite" or role="status")
  2. “Drop everything! You need to know this now or we’re all in big trouble. AAAAAAAAAAGHH.” (aria-live="assertive" or role="alert")

When marking up our own live region, we’re going to maximize compatibility by putting both of the equivalent attributes and values in place. This is because, unfortunately, some user agents do not support one or other of the equivalent attributes. More detailed information on maximizing compatibility6 of live regions is available from Mozilla.

Since losing internet connectivity is a major disaster, we’re going to use the more aggressive form.

<div id="message" role="alert" aria-live="assertive" class="online">
    <p>You are online.</p>
</div>

The code above doesn’t alert in any way by itself — the contents of the live region would have to dynamically change for that to take place. The script below will run a check to see if it can load test_resource.html every three seconds. If it fails to load it, or it has failed to load it but has subsequently succeeded, it will update the live region’s class value and change the wording of the paragraph. If you go offline unexpectedly, it will display <p>There’s no internets. Time to go to the pub!</p>.

The change will cause the contents of that #message live region to be announced, abruptly interrupting whatever else is currently being read on the page.

// Function to run when going offline
var offline = function() {
  if (!$('#message').hasClass('offline')) {
    $('#message') // the element with [role="alert"] and [aria-live="assertive"]
      .attr('class', 'offline')
      .text('There\'s no internets. Go to the pub!');
  }
};

// Function to run when back online
var online = function() {
  if (!$('#message').hasClass('online')) {
    $('#message') // the element with [role="alert"] and [aria-live="assertive"]
      .attr('class', 'online')
      .text('You are online.');
  }
};

// Test by trying to poll a file
function testConnection(url) {
  var xmlhttp = new XMLHttpRequest();
  xmlhttp.onload = function() { online(); };
  xmlhttp.onerror = function() { offline(); };
  xmlhttp.open("GET", url, true);
  xmlhttp.send();
}

// Loop the test every three seconds for "test_resource.html"
function start() {
  var rand = Math.floor(Math.random() * 90000) + 10000;
  testConnection('test_resource.html?fresh=' + rand);
  setTimeout(start, 3000);
}

// Start the first test
start();
Alert reads “Alert: there’s no internets. Go to the pub!”

There are more comprehensive ways to test to see if your application is online or not, including a dedicated script called offline.js7, but this little one is included for context. Note that some screen readers will prefix the announcement with “Alert!”, so you probably don’t want to include “Alert!” in the actual text as well, unless it’s really, really important information.

There is a demo of this example8 available.
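
As an aside, browsers also expose connectivity changes directly through the window’s online and offline events; a sketch of feeding the same live region without polling (though without the reliability of actually requesting a resource) might look like this:

// A sketch using the browser's built-in connectivity events to update
// the paragraph inside the #message live region from the example above.
window.addEventListener('offline', function () {
  document.querySelector('#message p').textContent =
    "There's no internets. Go to the pub!";
});

window.addEventListener('online', function () {
  document.querySelector('#message p').textContent = 'You are online.';
});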

test.css

We would like to maximize compatibility of live regions across browsers and assistive technologies. We can add a rule in our test.css to make sure equivalent attributes are all present like so:

[role="status"]:not([aria-live="polite"]), 
[role="alert"]:not([aria-live="assertive"]) 
	content: 'Warning: For better support, you should include
a politeness setting for your live region role using the
aria-live attribute'; [aria-live="polite"]:not([role="status"]), [aria-live="assertive"]:not([role="alert"]) content: 'Warning: For better support, you should
include a corresponding role for your aria-live
politeness setting';

I Want The Whole Story

“Taken out of context, I must seem so strange.”

– Fire Door by Ani DiFranco

By default, when the contents of a live region alter, only the nodes (HTML elements, to you and me) which have actually changed are announced. This is helpful behavior in most situations because you don’t want a huge amount of content reread to you just because a tiny part of it is different. In fact, if it’s all read out at once, how would you tell which part had changed? It would be like the memory tray game where you have to memorize the contents of a tray to recall which things were removed.

Tray full of bits of HTML

In some cases, however, a bit of context is desirable for clarification. This is where the aria-atomic attribute comes in. With no aria-atomic set, or with an aria-atomic value of false, only the elements which have actually changed will be notified to the user. When aria-atomic is set to true, all of the contents of the element with aria-atomic set on it will be read.

The term atomic is a little confusing. To be true means to treat the contents of this element as one, indivisible thing (an atom), not to smash the element into little pieces (atoms). Whether or not you think atomic is a good piece of terminology, the expected behavior is what counts and it is the first of the two behaviors which is defined.

One atom compared to lots of atoms

Gez Lemon offers a great example of aria-atomic9. In his example, we imagine an embedded music player which tells users what the currently playing track is, whenever it changes.

<div aria-live="polite" role="status" aria-atomic="true">
  <h3>Currently playing:</h3>
  <p>Jake Bugg — Lightning Bolt</p>
</div>

Even though only the name of the artist and song within the paragraph will change, because aria-atomic is set to true the whole region will be read out each time: “Currently playing: Jake Bugg — Lightning Bolt”. The “Currently playing” prefix is important for context.

Note that the politeness setting of the live region is polite not assertive as in the previous example. If the user is busy reading something else or typing, the notification will wait until they have stopped. It isn’t important enough to interrupt the user, not least because it’s their playlist: they might recognize all the songs anyway.

Box showing a graphic equalizer which reads “Currently playing: Jake Bugg — Lightning Bolt”

The aria-atomic attribute doesn’t have to be used on the same element that defines the live region, as in Lemon’s example. In fact, you could use aria-atomic on separate child elements within the same region. According to the specification:

“When the content of a live region changes, user agents SHOULD examine the changed element and traverse the ancestors to find the first element with aria-atomic set, and apply the appropriate behavior.”

Supported States and Properties10

This means we could also include another block within our live region to tell users which track is coming up next.

<div aria-live="polite" role="status">

   <div aria-atomic="true">
     <h3>Currently playing:</h3>
     <p>Jake Bugg — Lightning Bolt</p>
   </div>

   <div aria-atomic="true">
     <h3>Next in queue:</h3>
     <p>Napalm Death — You Suffer</p>
   </div>

</div>

Now, when Jake Bugg’s Lightning Bolt is nearing an end, we update the <p> within the next in queue block to warn users that Napalm Death are ready to take the mic: “Next in queue: Napalm Death — You Suffer”. As Napalm Death begin to play, the currently playing block also updates with their credentials and at the next available juncture the user is reminded that the noise they are being subjected to is indeed Napalm Death.
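
A rough sketch of that update (the #now-playing and #up-next ids are assumed additions to the two blocks above):

// Changing the text inside either aria-atomic block triggers a polite
// announcement of that block's entire contents.
function advanceQueue(nowPlayingText, upNextText) {
  document.querySelector('#now-playing p').textContent = nowPlayingText;
  document.querySelector('#up-next p').textContent = upNextText;
}

// advanceQueue('Napalm Death — You Suffer', 'Jake Bugg — Lightning Bolt');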

aria-busy

I was a bit mischievous using Napalm Death’s You Suffer as an example track because, at 1.316 seconds long, the world’s shortest recorded song would have ended before the live region could finish telling you it had started! If every track was that short, the application would go haywire.

In cases where lots of complex changes to a live region must take place before the result would be understandable to the user, you can include the aria-busy attribute11. You simply set this to true while the region is busy updating and back to false when it’s done. It’s effectively the equivalent of a loading spinner used when loading assets in JavaScript applications.

Typical loading spinner labelled ARIA atomic true

Usually you set aria-busy="true" before the first element (or addition) in the live region is loaded or altered, and false when the last expected element has been dealt with. In the case of our music player example, we’d probably want to set a timeout of ten seconds or so, making sure only music tracks longer than the announcement of those tracks get announced.

$('#music-info').attr('aria-busy', 'true');

// Update the song artist & title here, then...

setTimeout(function() {
  $('#music-info').attr('aria-busy', 'false');
}, 10000);

Buy The eBook

That concludes your extract from “It’s Alive!”, a chapter which goes on to explore the intricacies of designing accessible web-based dialogs. But that’s not all. There’s plenty more about creating accessible experiences in the book, from basic button control design to ARIA tab interfaces and beyond. Reviews for the eBook and purchasing options are available here12. The inimitable Bruce Lawson has written a lovely post13 about it, too.

Footnotes

  1. https://twitter.com/heydonworks
  2. https://shop.smashingmagazine.com/apps-for-all-coding-accessible-web-applications.html
  3. http://en.wikipedia.org/wiki/Newton%27s_laws_of_motion
  4. https://developer.mozilla.org/en-US/docs/Accessibility/ARIA/ARIA_Live_Regions
  5. http://www.marcozehe.de/2008/08/04/aria-in-gmail-1-alerts/
  6. https://developer.mozilla.org/en-US/docs/Accessibility/ARIA/ARIA_Live_Regions
  7. http://github.hubspot.com/offline/docs/welcome/
  8. http://heydonworks.com/practical_aria_examples/#offline-alert
  9. http://juicystudio.com/article/wai-aria_live-regions_updated.php
  10. http://www.w3.org/TR/wai-aria/states_and_properties#aria-atomic
  11. http://www.w3.org/TR/wai-aria/states_and_properties#aria-busy
  12. https://shop.smashingmagazine.com/apps-for-all-coding-accessible-web-applications.html
  13. http://www.brucelawson.co.uk/2014/apps-for-all-coding-accessible-web-applications-book-review/

Accessibility APIs: A Key To Web Accessibility

Web accessibility is about people. Successful web accessibility is about anticipating the different needs of all sorts of people, understanding your fellow web users and the different ways they consume information, empathizing with them and their sense of what is convenient and what frustratingly unnecessary barriers you could help them to avoid.

Armed with this understanding, accessibility becomes a cold, hard technical challenge. A firm grasp of the technology is paramount to making informed decisions about accessible design.

How do assistive technologies present a web application to make it accessible for their users? Where do they get the information they need? One of the keys is a technology known as the accessibility API (or accessibility application programming interface, to use its full formal title).

Reading The Screen

To understand the role of an accessibility API in making Web applications accessible, it helps to know a bit about how assistive technologies provide access to applications and how that has evolved over time.

A World of Text

With the text-based DOS operating system, the characters on the screen and the cursor position were held in a screen buffer in the computer’s memory. Assistive technologies could obtain this information by reading directly from the screen buffer or by intercepting signals being sent to a monitor. The information could then be manipulated — for example, magnified or converted into an alternative format such as synthetic speech.

Getting Graphic

The arrival of graphical interfaces such as OS/2, Mac OS and Windows meant that key information about what was on the screen could no longer be simply read from a buffer. Everything was now drawn on screen as a picture, including pictures of text. So, assistive technologies on those platforms had to find a new way to obtain information from the interface.

They dealt with this by intercepting the drawing calls sent to the graphics engine and using that information to create an alternate off-screen version of the interface. As applications made drawing calls through the graphics engine to draw text, carets, text highlights, drop-down windows and so on, information about the appearance of objects on the screen could be captured and stored in a database called an off-screen model. That model could be read by screen readers or used by screen magnifiers to zoom in on the user’s current point of focus within the interface. Rich Schwerdtfeger’s seminal 1991 article in Byte, “Making the GUI Talk1,” describes the then-emerging paradigm in detail.

Off-Screen Models

Recognizing the objects in this off-screen model was done through heuristic analysis. For example, the operating system might issue instructions to draw a rectangle on screen, with a border and some shapes inside it that represent text. A human might look at that object (in the context of other information on screen) and correctly deduce it is a button. The heuristics required for an assistive technology to make the same deduction are actually very complex, which causes some problems.

To inform a user about an object, an assistive technology would try to determine what the object is by looking for identifying information. For example, in a Windows application, the screen reader might present the Window Class name of an object. The assistive technology would also try to obtain information about the state of an object by the way it is drawn — for example, tracking highlighting might help deduce when an object has been selected. This works when an object’s role or state can easily be determined, but in many cases the relevant information is unclear, ambiguous or not available programmatically.

This reverse engineering of information is both fallible and restrictive. An assistive technology could implement support for a new feature only once it had been introduced into the operating system or application. An object might not convey useful information, and in any case it took some time to identify it, develop the heuristics needed to support it and then ship a new version of the screen reader. This created a delay between the introduction of new features and assistive technology’s ability to support it.

The off-screen model needs to shadow the graphics engine, but the engines don’t make this easy. The off-screen model has to independently calculate things like white-space management and alignment coordination, and errors would almost inevitably mount up. These errors could result in anomalies in the information conveyed to assistive technology users or in garbage buildup and memory leaks that lead to crashes.

Accessibility APIs

From the late 1990s, operating system accessibility APIs were introduced as a more reliable way to pass information to assistive technologies. Instead of applying complex heuristics to determine what an on-screen object might be, assistive technologies could query the accessibility API for specific information about each object. Authors could now provide the necessary information about an application in a form that they knew assistive technology would understand.

An accessibility API represents objects in a user interface, exposing information about each object within the application. Typically, there are several pieces of information for an object, including:

  • its role (for example, it might be a button, an application window or an image);
  • a name that identifies it within the interface (if there is a visible label like text on a button, this will typically be its name, but it could be encoded directly in the object);
  • its state or current condition (for example, a checkbox might currently be selected, partially selected or not selected).

The first platform accessibility API, Microsoft Active Accessibility (MSAA), was made available in a 1997 update to Windows 95. MSAA provided information about the role and state of objects and some of their properties. But it gave no access to things like text formatting, and the relationships between objects in the interface were difficult or impossible to determine.

In 1998, IBM and Sun Microsystems built a cross-platform accessibility API for Java. Java Swing 1.0 gave access to rich text information, relationships, tables, hyperlinks and more. The Java Jive screen reader, built on this platform, was the first time a screen reader’s information about the components of a user interface included role, state and associated properties, as well as rich text formatting details.

Notably, Java Jive was written by three developers in roughly five months; developing a screen reader through an off-screen model typically took several years.

Accessibility APIs Go Mainstream

In 2001 the Assistive Technology Service Provider Interface (AT-SPI) for Linux was released, based on the work done on Java, and in 2002 Apple included the NSAccessibility protocol with Mac OS X (10.2 Jaguar).

Meanwhile on Windows, the situation was getting complicated. Microsoft shipped the User Interface Automation (UIA) API as part of Windows 7, while IBM released IAccessible2 as an open standard for Windows and Linux, again evolved from the work done on Java.

Accessibility APIs existed for mobile platforms before touchscreen smartphones became dominant, but in 2009 Apple added the UI Accessibility API to iOS 3, and Android 1.6 (Donut) shipped with the Accessibility Framework.

By the beginning of 2015, Chrome OS stood out as the most mainstream platform lacking a standard accessibility API. But Google was beta testing its Automation API, intended to fill that gap in the platform.

Modern Accessibility APIs

In modern accessibility APIs, user interfaces are represented as a hierarchical tree. For example, an application window would contain several objects, the first of which might be a menu bar. The menu bar would contain a number of menus, each of which contains a number of menu items, and so on. The accessibility API describes an object’s relationship to other objects to provide context. For example, a radio button would probably be one “sibling” within a group.

Other features such as information about text formatting, applicable headers for content sections or table cells and things such as event notifications have all become commonplace in modern accessibility APIs.

Assistive technologies now make standard method calls to the operating system to get information about the objects on the screen. This is far more reliable, and far more efficient, than intercepting low-level operating system messages and trying to deconstruct them into something meaningful.

From The Web To The Accessibility API

In browsers, the platform accessibility API is used both to make information about the browser itself available to assistive technologies and to expose information about the currently rendered content.

Browsers typically support one or more of the available accessibility APIs for the platform they’re running on. For example, on Windows, Firefox, Chrome, Opera and Yandex support MSAA/IAccessible and IAccessible2, while Internet Explorer supports MSAA/IAccessible and UIAExpress. Safari and Chrome support NSAccessibility on OS X and UIAccessibility on iOS.

The browser uses the HTML DOM, along with further information derived from CSS, to generate an accessibility tree hierarchy of the content it is displaying, and it passes that information to the platform accessibility API. Information such as the role, name and state of each object in the content, as well as how it relates to other objects in the content, can then be queried by assistive technologies.

Let’s see how this works with some HTML:

<p><img src="mc.png" alt="My cat" longdesc="meeow.html">Rocks!</p>

We have an image, rendered as part of a paragraph. A browser exposes several pieces of information about the image to the accessibility API:

  1. It has a role of “image” (or “graphic” — details vary between platforms). This is implicitly determined from the fact that it is an HTML img element.
  2. Its name is “My cat”. For images, the name is typically derived from the alt attribute.
  3. A description is available on request, at the URL meeow.html (at the same “base” as the image).
  4. The parent is a paragraph element, with a role of “text.”
  5. The image has a “sibling” in the same container, the text node “Rocks!”

An assistive technology would query the accessibility API for this information, which it would present so the user can interact with it. For example, a screen reader might announce, “Graphic: My cat. Description available.”

(Does a cat picture need a full description? Perhaps not, but try explaining that to people who really want to tell you just how amazing and talented their feline friends actually are — or those of their readers who want to know all about what this cat looks like! Meanwhile, the philistines among us can ignore the extra information.)

Roles

Most HTML elements have what are called “roles,” which are a way of describing elements. If you are familiar with WAI-ARIA, you will be aware of the role attribute, which sets a role explicitly. Most elements already have implicit roles, however, which go along with the element type. For example:

  • <ul> and <ol> have “list” as implicit role,
  • <a> has “link” or “hyperlink” as implicit role,
  • <body> has “document” as implicit role.

These role mappings are being standardized and documented in the W3C’s “HTML Accessibility API Mappings2” specification.

Names

While roles are typically derived from the type of HTML element, the name (sometimes referred to as the “accessible name”) of an object often comes from one of several different sources. In the case of a form field, the name is usually taken from the label associated with the field:

<input type="radio" id="tequila" name="drinks" checked>
<label for="tequila">Reposado</label>

In this example, a button has the “radio button” role. Its accessible name will be “Reposado,” the text content of the label element. So, when a speech-recognition tool is instructed to “Click Radio button Reposado,” it can target the correct object within the interface.

The checked attribute indicates the state of the button, so that a screen reader can announce “Radio button Reposado Checked” or allow a user to navigate directly between the checked options in order to rapidly review a form that contains multiple sets of radio buttons.

Authors have an important role to play, providing the key information that assistive technologies need. If authors don’t do the “right thing,” assistive technologies must look in other places to try to get an accessible name — if there is no label, then a title or some text content might be near the radio button, or its relationship to other elements might help the user through context.

It is important to note that authors should not rely on an assistive technology’s ability to do this, because it is generally unreliable. It is a “repair” strategy that gives assistive technology users some chance of using a poorly authored page or website, such as the following:

<p>How good is reposado?<br>
<!--BAD CODE EXAMPLE: DON'T DO THIS-->
<input type="radio" id="fantastic" name="reposado" checked >
<label for="reposado">Fantastic</label><br>
<input type="radio" id="notBad" name="tequila"><br>
<input type="radio" id="meh" name="tequila" title="meh"> Meh

Faced with this case, a screen reader might provide information such as “second of three options,” based on information that the browser provides to the accessibility API about the form. Little else can be determined reliably from the code, though.

Nothing in the code associates the question with the set of radio buttons, and nothing informs the browser of what the accessible name for the first two buttons should be. The for and id attributes of the <label> and <input> for the first button do not share a common value, and nothing associates the nearby text content with the second button. The browser could use the title of the third button as an accessible name, but it duplicates the nearby text and unnecessarily bloats the code.

A well-authored version of this would use the fieldset element to group the radio buttons and use a legend element to associate the question with the group. Each of the buttons would also have a properly associated label.

<fieldset><legend>How good is reposado?</legend>
<!-- THIS IS A BETTER WAY TO CODE THE EXAMPLE -->
<input type="radio" id="fantastic" name="reposado" checked>
<label for="fantastic">Fantastic</label><br>
<input type="radio" id="notBad" name="reposado">
<label for="notBad">Not bad</label><br>
<input type="radio" id="meh" name="reposado">
<label for="meh">Meh</label><br>
</fieldset>

Making this information available through the accessibility API is more efficient and less prone to error than relying on assistive technologies to create an off-screen model or guess at the information they need.

Conclusion

Today’s technologies — operating systems, browsers and assistive technologies — work together to extract accessibility information from a web interface and appropriately present it to the user. If appropriate content semantics are not available, then assistive technologies will use old and unreliable techniques to make the interface usable.

The value of accessibility APIs is in allowing the operating system, browser and assistive technology to efficiently and reliably give users the information they need. It is now easy to make an interface developed with well-written HTML, CSS and JavaScript very accessible and usable for assistive technology users. A big part of accessibility is, therefore, an easily met responsibility of web developers: Know your job, use your tools well, and many pieces will fall into place as if by magic.

With thanks to Rich Schwerdtfeger, Steve Faulkner and Dominic Mazzoni.

(hp, al, ml)

Footnotes

  1. http://www.paciellogroup.com/blog/2015/01/making-the-gui-talk-1991-by-rich-schwerdtfeger/
  2. http://rawgit.com/w3c/aria/master/html-aam/html-aam.html

Accessibility Originates With UX: A BBC iPlayer Case Study

Not long after I started working at the BBC, I fielded a complaint from a screen reader user who was having trouble finding a favorite show via the BBC iPlayer’s home page1. The website had recently undergone an independent accessibility audit which indicated that, other than the odd minor issue here and there, it was reasonably accessible.

I called the customer to establish what exactly the problem was, and together we navigated the home page using a screen reader. It was at that point I realized that, while all of the traditional ingredients of an accessible page were in place — headings, WAI ARIA Landmarks2, text alternatives and so on — it wasn’t very usable for a screen reader user.

iPlayer’s old home page. (View large version4)

The first issue was that the subnavigation was made up of only two links: “TV” and “Radio,” with links to other key areas such as “Categories,” “Channels” and “A to Z” buried further down the content order of the page, making them harder for the user to find.

iPlayer’s old home page showing “Categories,” “Channels” and “A to Z” far down the content order. (View large version6)

The second issue was how verbose the page was to the screen reader user. Instead of hearing a link to a program once, the program would be announced twice because the thumbnail image and the heading for the program were presented as two separate links. This made the page longer to listen to and was confusing because links to the same destination were worded differently.

iPlayer’s old home page showing duplicate links. (View large version8)

Finally, keyboard access on the page was illogical. In the “Categories” area, for example, a single click on a category would reveal four items in a panel next to it. To access the full list of items in that category, you had to click again on the same link to be taken to a listing page. This was a major hurdle for the user and the place where the customer I was talking to gave up using the application altogether.

iPlayer’s old home page showing the “Categories” links highlighted. (View large version10)

It was clear that, while the website had been built with accessibility in mind, it hadn’t been designed with accessibility in mind and this is where the issues originated.

The Challenge

At the BBC, a number of internal standards and guidelines are in place that teams are required to follow when delivering accessible websites and mobile applications.

There is also a strong culture of accessibility; the BBC is a publicly funded organization14, and accessibility is considered central to its remit and is a stronger driver than any legal requirement. So, how did this happen?

Part of the issue is that standards and guidelines tend to focus more on code than design, more on output than outcome, more on compliance than experience. As such, technically compliant pages could be built that are not the most usable for disabled users.

It may not seem immediately obvious, but visual design can have a massive impact on users who cannot see the page. I often find that mobile applications and websites that are problematic to make accessible are the ones where the visual design, by dictating structure, does not allow it.

This does not mean that standards and guidelines are redundant — far from it. But what we have found at the BBC is that standards need to sit within, and inform, an accessibility framework that runs through product management, user experience, development and quality assurance. As such, accessibility originates with UX. Most of the thinking and requirements should be considered up front so that poor accessibility isn’t designed in.

While redesigning the BBC iPlayer website, renewed focus was given to inclusive design, which, while adhering to the BBC’s standards and guidelines, is driven by four principles (more on that below). We then distilled our standards and guidelines to create a focused list of requirements for the UX to follow. We also started to train designers to annotate their own designs for accessibility.

UX Principles

Our four main principles are the following:

  • Give users choice.
  • Put users in control.
  • Design with familiarity in mind.
  • Prioritize features that add value.

Give Users Choice

Never assume that just because users can access content one way, that is the way they want to access it. Because BBC iPlayer has “audio described” and “sign language” formats, there was never any doubt that both should have their own dedicated listing pages, accessed via the “Categories” dropdown link. (Note that all on-demand content is subtitled, which is why there is no “Subtitled” category; subtitles can be switched on in the media player.)

The Categories dropdown with Audio Described and Signed sections15
The “Categories” dropdown with “Audio Described” and “Signed” sections. (View large version16)

User research and feedback indicated, however, that although people want dedicated categories, they also want to be able to search for and browse content in the same way that any other users would and to select their preferred format from there. I have stayed in touch over the years with the gentleman who complained about the old iPlayer page, and he’s said himself, “Don’t send us into disability silos!”

This means that, from the outset, the designs need to signpost “Audio Description” and “Signed” content via search results, A to Z, category and other listing pages. It is also important not to make assumptions about, or stereotype, users with disabilities. For instance, a person with a severe vision impairment might not always use audio description; news, sports, music programs and live events often aren’t supported by audio description because the commentators already provide enriched commentary.

Alternative formats shown in listing pages17
List pages such as search, shown here, indicate what formats programs are available in. (View large version18)

On-demand pages also list alternative formats, allowing users to choose what they want. Looking ahead, the option to choose your format could also be included in the Standard Media Player19 — the BBC media player used for on-demand and live streaming video across all BBC products, including iPlayer.

Playback pages showing high definition and audio described formats20
Screenshot of the playback page showing HD and AD formats. (View large version21)

Put Users in Control

Never taking control away from the user is essential. A key aspect of this in iPlayer, which is responsive, is not suppressing pinch zoom. Time and again in user testing, we have observed users zooming content, even on responsive websites, where text might be intentionally larger.

The ability to pinch zoom was suppressed on many websites because of an iOS bug (rectified in iOS 6) that caused pages to resize poorly when the orientation changed from portrait to landscape. Now that the bug has been fixed, there is no reason to continue suppressing zoom.
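
As a rough sketch, not suppressing zoom is mostly about what you leave out of the viewport meta tag:

<!-- A responsive viewport that leaves pinch zoom available -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Patterns like these lock users out of zooming; avoid them: -->
<!-- <meta name="viewport" content="width=device-width, user-scalable=no"> -->
<!-- <meta name="viewport" content="width=device-width, maximum-scale=1"> -->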

Another aspect of control is autoplay. While iPlayer currently has autoplay for live content, this can be a problem because the sound of the video can make it difficult for a screen reader user to hear their reader’s output. However, we do know of screen reader users who request autoplay because it means they don’t have to navigate to the player, find the play button and activate play. The answer is to look at ways to give users control over playback by opting in or out of autoplay, such as by using a popup and saving preferences with cookies.
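
A minimal sketch of such an opt-in (not iPlayer’s actual implementation; the function names and videoElement are placeholders of mine) might store the choice in a cookie and consult it before playback:

// A minimal sketch: remember the user's autoplay choice in a cookie
// and consult it before starting playback.
function setAutoplayPreference(enabled) {
  document.cookie = 'autoplay=' + (enabled ? 'on' : 'off') +
    '; max-age=31536000; path=/';
}

function autoplayEnabled() {
  return /(?:^|;\s*)autoplay=on(?:;|$)/.test(document.cookie);
}

// e.g. wired up to an opt-in popup:
// setAutoplayPreference(true);
// if (autoplayEnabled()) { videoElement.play(); }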

Design With Familiarity in Mind

There needs to be a balance between the new and the familiar. Users understand how to interact with pages and apps that use familiar design patterns. This is especially important in native apps for iOS and Android, where standard UI components come with accessibility built in.

Equally important is the language used across the BBC’s native iPlayer apps and responsive website. Where the platform allows, consistent labels for headings, links and buttons — not just visually, but also via alternatives for screen reader users — ensure that the experience is familiar and recognizably “BBC iPlayer,” regardless of the platform.

Tied into this, the new designs reinforce a logical heading structure within the code, which in turn supports navigation for screen reader users. Key to this is ensuring that the pattern used for the heading structure is repeated across pages, so that users do not find main headings in different places depending on what page they are on. While structure is typically viewed as a responsibility of developers, it needs to be decided before designs are signed off in order to prevent poor structure getting coded in — more on that later.

Prioritize Features That Add Value

Accessibility at the BBC is not just about meeting code, content and design requirements, but also about incorporating helpful features that add value for all users, including disabled users. A large proportion of feedback we get from our disabled users pertains to usability issues that could be experienced by anyone on some level but that affect disabled users far more severely. When we incorporate features to help users with specific disabilities, everyone gains access to a richer and easier experience.

One obstacle that comes up time and again is finding a favorite show. I’ve spoken with many screen reader users who say they save shortcuts to their favorite shows on their desktop but, due to changing URLs, often lose content. A simple way to address this that benefits all users is to ensure that there is a mechanism for saving favorites on the website. Adding in options to sort favorites and list them the way you want further improves this. It may sound unrelated to accessibility, but it was the single most requested feature received from disabled users. Simply accessing the favorites page to watch the latest episode of something, rather than having to search the website, makes all the difference.
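
A purely illustrative sketch of that mechanism (the storage key and function name are mine, not iPlayer’s) saves favorites by a stable program ID rather than by URL, so they survive URL changes:

function addFavourite(programmeId) {
  // Read the saved list, defaulting to an empty array
  var favourites = JSON.parse(localStorage.getItem('favourites') || '[]');

  // Store stable IDs, not URLs, so favourites survive URL changes
  if (favourites.indexOf(programmeId) === -1) {
    favourites.push(programmeId);
    localStorage.setItem('favourites', JSON.stringify(favourites));
  }
}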

Sorting favourites using A to Z and recent options22
The “Favourites” page, with options to sort by “A to Z” and “Recent”. (View large version23)

Finding ways to allow people to get to the content they want more quickly has also influenced what is available within the media player itself. Once an episode has finished playing, exiting the media player and navigating back to the website to find the next episode is a massive overhead for some users. Adding a “More” button to the player itself — showing the next episode or programs similar to the current one — cuts down on the amount of effort it takes users to find new content.

The Standard Media Player plug in for related content24
The “You may also like” plugin shows related content and next episodes within the Standard Media Player. (View large version25)

One key feature that has added value to BBC iPlayer’s native iOS and Android apps, as well as the website (when viewed in Chrome), is support for Google Chromecast26. Being able to control what you watch on TV from your own iOS or Android device is invaluable for disabled users, who would otherwise have to rely on a remote control and a potentially inaccessible TV interface.

Chromecast on BBC iPlayer27
BBC iPlayer and Chromecast. (View large version28)

Guidelines

The principles above exist to create a mindset that helps product owners and UX practitioners alike when shaping and designing inclusive products. In addition to the four principles, a set of guidelines is used to design more accessible interfaces. The following are a subset taken from the “BBC Mobile Accessibility Standards and Guidelines29”:

  1. Color contrast
    Ensure that text and backgrounds exceed the WCAG Double A 4.5:1 contrast minimum (see the sketch after this list).
  2. Color and meaning
    Information conveyed with color must also be identifiable from context or markup.
  3. Content order
    Content order must be logical.
  4. Structure
    When supported by the platform, pages must provide a logical and hierarchical heading structure.
  5. Containers and landmarks
    When supported by the platform, page containers or landmarks should be used to describe page structure.
  6. Duplicate links
    Controls, objects and grouped interface elements must be represented as a single component.
  7. Touch target size
    Targets must be large enough to touch accurately (at least 44 pixels).
  8. Spacing
    An inactive space must surround all active elements (unless they are large blocks exceeding 44 pixels).
  9. Zoom
    Where zoom is supported by the platform, it must not be suppressed.
  10. Actionable elements
    Links and other actionable elements must be clearly distinguishable.

The New iPlayer

Keeping in mind this backdrop of principles and guidelines, along with the renewed focus on adding value and features that enhance the experience for disabled users, here are a few of the changes introduced in the BBC’s new iPlayer:

The new BBC iPlayer homepage30
The BBC’s new iPlayer home page has better content order, search tools, structure and keyboard access. (View large version31)

At launch, the iPlayer’s navigation housed the BBC’s channels, a “TV Guide,” “Favourites” and “Categories.” These all sit at the start of the page, high up in the content order. While they are visually easy to see, they are also easily discoverable by screen reader users via a hidden heading and labeled navigation landmark:

<div role="navigation">
  <h2>iPlayer navigation</h2>
  <!-- channel, TV Guide, Favourites and Categories links… -->
</div>

Where previously the “Categories” were unusable for the screen reader user I spoke with, they are now prominent in the page and fully keyboard navigable. Since launch, the addition of more channels has meant that the channel links have been rehoused in their own dropdown menu.

Search tools have also been added, enabling users to carry out predictive search, browse A to Z or view their most recently watched program. This is all keyboard accessible, it makes use of headings, and it has landmarks where appropriate.

The home page carousel is also fully keyboard accessible. Each program in the stream is presented as one link, with the reading order of text starting with the primary information first: channel attribution, program name, episode information, abstract and program duration.

Work has also been carried out to improve visible focus and bring both the iPlayer website and the Standard Media Player in line with the BBC header and footer. The pink underline used for the hover and focus states in the main BBC navigation is now used within the Standard Media Player to indicate when a button is selected — for example, when the subtitles are switched on. This replaces the use of color only to indicate a selected state, which was indistinguishable from the hover and focus states.

BBC navigation hover and focus states32
The hover and focus pink underline used in the BBC header for iPlayer. (View large version33)
Hover and focus states used for the subtitle button on the Standard Media Player34
Active and inactive hover and focus states on the subtitle button in the Standard Media Player. (View large version35)

You can read more about what steps were taken to make iPlayer web-accessible36 and to make the Standard Media Player accessible37, including the creation of an accessible media player in Flash38, on the BBC’s Internet Blog.

Annotated UX

All of the thinking around inclusive design that comes from product owners, UX practitioners and designers needs to be captured and communicated to developers and engineers. At the BBC, we are moving to a model where designs need to be annotated for accessibility. This includes:

  • headings,
  • containers,
  • content order,
  • color contrast,
  • alternatives to color and meaning,
  • visible focus,
  • keyboard and input interactions.

Annotated UX for the iPlayer homepage showing headings, lists, labels and content order39
An example of an annotated UX showing headings and labels. (View large version40)

The design above, showing an early version of the BBC One home page in iPlayer, outlines where the <h1> to <h6> headings should be. The UX practitioner doesn’t need an in-depth knowledge of code, but rather an understanding of the hierarchy of data within a page. As such, an equally acceptable approach would be to indicate the “main heading,” “secondary heading,” “third-level heading” and so on. Developers can then take this and translate it into semantic markup.
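
As an illustrative sketch (not iPlayer’s actual markup), an annotation of “main heading”, “secondary heading” and “third-level heading” might translate into:

<h1>BBC One</h1>          <!-- main heading: the page title -->
<h2>Watch Live</h2>       <!-- secondary heading: a content strip -->
<h2>Catch Up</h2>         <!-- secondary heading: another strip -->
<h3>Latest Episodes</h3>  <!-- third-level heading: a group within a strip -->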

Equally, indicating the logical order of content helps developers to code content in the right sequence (i.e. source order) — something that is essential to a screen reader or sighted keyboard user’s comprehension of the page.

Annotating the UX in this way is key to identifying designs that don’t allow for a logical page structure, content order or behavior. It is the first step to generating a style guide that documents focus states, colors and so on. Further down the line, these requirements can also be used to generate user acceptance criteria and automated quality assurance tests.

Even if you’re working in an agile way, where designs are iterative and not delivered in a complete form, annotation still works. As long as the basic framework of the page is well defined, the visual design can evolve from that.

Summary

It’s very easy to get bogged down by accessible output and to forget that, ultimately, accessibility is about people. As such, keep the following in mind, whether you are working in product, UX, development or quality assurance:

  • Design with choice in mind.
  • Always give users control over the page.
  • Prioritize features that add value for disabled users.
  • Design with familiarity in mind.
  • Integrate accessibility into annotated UX and style guides.
  • Make no assumptions. Test ideas and concepts.

Fostering these key principles across the entire team will go a long way to ensuring that products are inclusive and usable for disabled people. Listening to users and actively including their feedback, along with adhering to organizational standards and guidelines, are essential.

(hp, il, al, ml)

Footnotes

  1. http://www.bbc.co.uk/iplayer
  2. http://www.w3.org/TR/wai-aria/roles#landmark_roles
  3. http://www.smashingmagazine.com/wp-content/uploads/2015/02/101-iPlayerHomePage-opt.png
  4. http://www.smashingmagazine.com/wp-content/uploads/2015/02/101-iPlayerHomePage-opt.png
  5. http://www.smashingmagazine.com/wp-content/uploads/2015/02/102-iPlayerHomePage-opt.png
  6. http://www.smashingmagazine.com/wp-content/uploads/2015/02/102-iPlayerHomePage-opt.png
  7. http://www.smashingmagazine.com/wp-content/uploads/2015/02/103-iPlayerHomepage-opt.png
  8. http://www.smashingmagazine.com/wp-content/uploads/2015/02/103-iPlayerHomepage-opt.png
  9. http://www.smashingmagazine.com/wp-content/uploads/2015/02/104-iPlayerHomepage-opt.png
  10. http://www.smashingmagazine.com/wp-content/uploads/2015/02/104-iPlayerHomepage-opt.png
  11. http://www.bbc.co.uk/guidelines/futuremedia/accessibility/
  12. http://www.bbc.co.uk/guidelines/futuremedia/accessibility/screenreader.shtml
  13. http://www.bbc.co.uk/guidelines/futuremedia/accessibility/mobile_access.shtml
  14. http://www.bbc.co.uk/corporate2/insidethebbc/whoweare
  15. http://www.smashingmagazine.com/wp-content/uploads/2015/02/105-iPLayerHomePage-categories-opt.png
  16. http://www.smashingmagazine.com/wp-content/uploads/2015/02/105-iPLayerHomePage-categories-opt.png
  17. http://www.smashingmagazine.com/wp-content/uploads/2015/02/106-iPlayerListings-opt.png
  18. http://www.smashingmagazine.com/wp-content/uploads/2015/02/106-iPlayerListings-opt.png
  19. http://www.bbc.co.uk/blogs/internet/posts/Standard-Media-Player
  20. http://www.smashingmagazine.com/wp-content/uploads/2015/02/107-iPlayerMediaPlayer-opt.png
  21. http://www.smashingmagazine.com/wp-content/uploads/2015/02/107-iPlayerMediaPlayer-opt.png
  22. http://www.smashingmagazine.com/wp-content/uploads/2015/02/108-iPlayerFavourites-opt.png
  23. http://www.smashingmagazine.com/wp-content/uploads/2015/02/108-iPlayerFavourites-opt.png
  24. http://www.smashingmagazine.com/wp-content/uploads/2015/02/109-iPlayerMediaPlayerPlugin-opt.png
  25. http://www.smashingmagazine.com/wp-content/uploads/2015/02/109-iPlayerMediaPlayerPlugin-opt.png
  26. http://www.bbc.co.uk/blogs/internet/posts/Accessibility-on-BBC-iPlayer-on-Chromecast
  27. http://www.smashingmagazine.com/wp-content/uploads/2015/02/110-iPlayerChromecast-opt.jpg
  28. http://www.smashingmagazine.com/wp-content/uploads/2015/02/110-iPlayerChromecast-opt.jpg
  29. http://www.bbc.co.uk/guidelines/futuremedia/accessibility/mobile
  30. http://www.smashingmagazine.com/wp-content/uploads/2015/02/111-iPLayerHomepage-opt.png
  31. http://www.smashingmagazine.com/wp-content/uploads/2015/02/111-iPLayerHomepage-opt.png
  32. http://www.smashingmagazine.com/wp-content/uploads/2015/02/112-iPlayerNavigationFocusState-opt.png
  33. http://www.smashingmagazine.com/wp-content/uploads/2015/02/112-iPlayerNavigationFocusState-opt.png
  34. http://www.smashingmagazine.com/wp-content/uploads/2015/02/113-iPlayerHoverStates-opt.png
  35. http://www.smashingmagazine.com/wp-content/uploads/2015/02/113-iPlayerHoverStates-opt.png
  36. http://www.bbc.co.uk/blogs/internet/posts/Making-the-new-iPlayer-accessible-for-all-users
  37. http://www.bbc.co.uk/blogs/internet/posts/Standard-Media-Player-accessibility
  38. http://www.bbc.co.uk/blogs/internet/posts/Creating-an-accessible-media-player-in-Flash
  39. http://www.smashingmagazine.com/wp-content/uploads/2015/02/114-iPlayerCarousel-opt.png
  40. http://www.smashingmagazine.com/wp-content/uploads/2015/02/114-iPlayerCarousel-opt.png

The post Accessibility Originates With UX: A BBC iPlayer Case Study appeared first on Smashing Magazine.

Read this article:  

Accessibility Originates With UX: A BBC iPlayer Case Study

Enhancing User Experience With The Web Speech API

It’s an exciting time for web APIs, and one to watch out for is the Web Speech API. It enables websites and web apps not only to speak to you, but to listen, too. It’s still early days, but this functionality is set to open a whole array of use cases. I’d say that’s pretty awesome.

In this article, we’ll look at the technology and its proposed usage, as well as some great examples of how it can be used to enhance the user experience.

Image credit: Sebastian Schöld2

Disclaimer: This technology is pretty cutting-edge, and the specification is currently with the W3C as an “unofficial editor’s draft” (as of 6 June 2014). The likelihood that usage will differ slightly from the code snippets in this article is high. Checking the specification3 and testing thoroughly before releasing code are always wise.

Speech Synthesis

The API comes in two parts. To start, let’s look at the speech synthesis part, the bit that speaks to you. If your website has some textual content — whether body copy, form inputs, alt tags, etc. — you could run some lovely functions and the device would speak the words to the user.

Let’s look at some of the code needed to make this happen. First, you would create a new instance of the SpeechSynthesisUtterance interface. Then, you would specify the text to be spoken. Then, you would add this instance to a queue, which tells the browser what to speak and when.

Below I have wrapped all of this in a function for us to call, named speak, with the text we want spoken as a parameter.

function speak(textToSpeak) {
   // Create a new instance of SpeechSynthesisUtterance
   var newUtterance = new SpeechSynthesisUtterance();

   // Set the text
   newUtterance.text = textToSpeak;

   // Add this text to the utterance queue
   window.speechSynthesis.speak(newUtterance);
}

All we need to do now is call this function and pass in some words to be spoken:

speak('Welcome to Smashing Magazine');

More functionality is included in SpeechSynthesisUtterance. You can stop, start and pause the queue, as well as set the language, rate and voice for each utterance. Stopping, starting or pausing an utterance fires an event that you can hook into, as does changing the voice. Plenty to play around with!
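
As a quick sketch (assuming the browser has already loaded its voices, which can arrive asynchronously), customizing and monitoring an utterance might look like this:

var utterance = new SpeechSynthesisUtterance('Welcome back');
utterance.lang = 'en-GB'; // set the language
utterance.rate = 1.2;     // slightly faster than the default of 1

// Choose from whatever voices the browser provides
var voices = window.speechSynthesis.getVoices();
if (voices.length > 0) {
  utterance.voice = voices[0];
}

// Hook into an event, e.g. when speaking has finished
utterance.onend = function () {
  console.log('Finished speaking.');
};

window.speechSynthesis.speak(utterance);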

At the moment, speech synthesis is supported only in Chrome and Safari (both on desktop and mobile devices). Also, the voices available to you via the API largely depend on the operating system. Google has its own set of default voices for Chrome, available on Mac OS X, Windows and Ubuntu. However, in Chrome on a Mac, OS X’s system voices are also available and, thus, are the same as those in Safari on OS X. You can easily see which voices are available in the Developer Tools console:

window.speechSynthesis.getVoices();

Tip: If you’re on OS X, check out the voice “Zarvox.”

Speech Recognition

The other part of the Web Speech API is speech recognition, which enables the user to speak into the device’s microphone and have their speech recognized by the website or web app.

Let’s run through some code. This time, we’ll create a new instance of the SpeechRecognition interface. Because this part is supported only in Chrome, we’ll have to include the webkit prefix.

var newRecognition = new webkitSpeechRecognition();

SpeechRecognition comes with quite a few attributes. One that we are likely to change is continuous, whose default state of false means that the browser will stop listening after a break in speech. If you want your website or web app to keep listening, then set the attribute to true:

newRecognition.continuous = true;

To start and stop speech recognition, call the start() and stop() methods:

// start recognition
newRecognition.start();

// stop recognition
newRecognition.stop();

Again, we can hook into plenty of events, such as soundstart, speechstart, result and error. I have prepared a demo4 that shows how to access the words detected, from the result event method. The code goes on to match the words spoken against some simple navigation, activating the appropriate link if detected.
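
As a minimal sketch (not the demo’s exact code), the result event exposes the recognized words like so:

newRecognition.onresult = function (event) {
  // With continuous recognition, results accumulate; read the newest one
  var latest = event.results[event.results.length - 1];

  // Each result holds one or more alternatives; the first is the most likely
  var transcript = latest[0].transcript;

  console.log('Heard: ' + transcript);
};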

Uses

Dictation

At the moment, the most common use of the Speech API is as a dictation or reading mechanism. That is, the user speaks into the mic and the device translates the speech into text (as demoed by Chrome’s development team5), or the user passes in text to be read out by the device.

Having a device speak out some information definitely has its advantages. Imagine your mirror telling you what the weather will be like first thing in the morning.

Plenty of car manufacturers have installed text-to-speech capabilities over the last couple of years. Imagine, in the not-too-distant future, your browser’s reading list being read out to you as you drive.

Voice Control

Dictation could easily be turned into voice control, as we saw with the recognition demo above, which could be modified to allow for navigation around a website. Add it to web-enabled TVs and we might just be living in the 2015 of Back to the Future 2.

I’m fortunate to work with some very talented colleagues, one of whom created a tennis scoring app. I was delighted to find that he could control the app with his voice, speaking the score out loud as he was playing a game.

Translation

Translation would look very different when done in real time. Someone could converse in one language, and another person’s device would speak out what is being said in their own language. Hook that up to a Bluetooth earpiece and eat your heart out, Arthur Dent6. We’re getting a little closer to each person having their own Babel fish7.

Limitations

Offline capability needs more consideration. As it stands, Chrome sends the recorded audio to its servers and pings back the result. Thus, an Internet connection is needed for it to work — not ideal.

Conclusion

Nevertheless, it is still exciting, and the technology is opening up. I look forward to the day when looking for the remote is a thing of the past, and I can just tell the TV to stream the latest Sin City movie.

Would we actually use the web for this? Why not? It’s already universal. You can take the web and its speech wherever you go.

I have met some resistance when talking about this API. People either can’t see a need for it with the web, or they would feel uncomfortable talking to their device — both valid views. However, I hope I have inspired you to at least give it a go and think about it the next time you are building something. Start welcoming speech: It might be just what you’re listening for.

(ml, al, il)

Footnotes

  1. http://slides.com/schold/web-speech-api#/
  2. http://slides.com/schold/web-speech-api#/
  3. https://dvcs.w3.org/hg/speech-api/raw-file/tip/webspeechapi.html
  4. http://codepen.io/Rumyra/pen/bCphe
  5. https://www.google.com/intl/en/chrome/demos/speech.html
  6. http://en.wikipedia.org/wiki/Arthur_Dent
  7. http://en.wikipedia.org/wiki/List_of_races_and_species_in_The_Hitchhiker%27s_Guide_to_the_Galaxy#Babel_fish

The post Enhancing User Experience With The Web Speech API appeared first on Smashing Magazine.

View original – 

Enhancing User Experience With The Web Speech API