Tag Archives: voice

The Importance Of Manual Accessibility Testing

Eric Bailey



Earlier this year, a man drove his car into a lake after following directions from a smartphone app that helps drivers navigate by issuing turn-by-turn directions. Unfortunately, the app’s programming did not include instructions to avoid roads that turn into boat launches.

From the perspective of the app, it did exactly what it was programmed to do, i.e., find the optimal route from point A to point B given the information made available to it. From the perspective of the man, it failed him by not taking the real world into account.

The same principle applies to accessibility testing.


Automated Accessibility Testing

I am going to assume that you’re reading this article because you’re interested in learning how to test your websites and web apps to ensure they’re accessible. If you want to learn more about why accessibility is necessary, the topic has been covered extensively elsewhere.

Automated accessibility testing is a process where you use a series of scripts to test for the presence, or absence, of certain conditions in code. These conditions are dictated by the Web Content Accessibility Guidelines (WCAG), a standard by the W3C that outlines how to make digital experiences accessible.

For example, an automated accessibility test might check to see if the tabindex attribute is present and if its value is greater than 0. The pseudocode would be something like:


A flowchart that asks if the tabindex value is present. If yes, it asks if the tabindex value is greater than 0. If it is greater than zero, it fails. If not, it passes. If no tabindex value is present, it also passes.
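
Expressed as a stand-alone script, that check might look something like the sketch below (illustrative only, not any particular tool’s actual rule):

// Flag any element whose tabindex value is greater than 0; positive values
// override the natural focus order. tabindex="0" and negative values pass.
const failures = [];

document.querySelectorAll("[tabindex]").forEach((element) => {
  const value = parseInt(element.getAttribute("tabindex"), 10);

  if (!Number.isNaN(value) && value > 0) {
    failures.push(element);
  }
});

console.log(`${failures.length} element(s) failed the tabindex check.`);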

Failures can then be collected and used to generate reports that disclose the number and severity of accessibility issues. Certain automated accessibility products can also integrate as a Continuous Integration or Continuous Deployment (CI/CD) tool, presenting just-in-time warnings to developers when they attempt to add code to a central repository.

These automated programs are incredible resources. Modern websites and web apps are complicated things that involve hundreds of states, thousands of lines of code, and complicated multi-screen interactions. It’d be absurd to expect a human (or a team of humans) to mind all the code controlling every possible permutation of the site, to say nothing of things like regressions, software rot, and A/B tests.

Automation really shines here. It can repeatedly and tirelessly pore over these details with perfect memory, at a rate far faster than any human is capable of.

However…

Automated accessibility tests aren’t a turnkey solution, nor are they a silver bullet. There are some limitations to keep in mind when using them.

Thinking To Think Of Things

One of both the best and worst aspects of the web is that there are many different ways to implement a solution to a problem. While this flexibility has kept the web robust and adaptable and ensured it outlived other competing technologies, it also means that you’ll sometimes see code that is, um, creatively implemented.

The test suite is only as good as what its author thought to check for. A naïve developer might only write tests for the happy path, where everyone writes semantic HTML, fault-tolerant JavaScript, and well-scoped CSS. However, this is the real world. We need to acknowledge that things like tight deadlines, unfamiliarity with the programming language, atypical user input, and sketchy 3rd party scripts exist.

For example, the automated accessibility testing site Tenon.io wisely includes a rule that checks to see if a form element has both a label element and an aria-label associated with it, and if the text strings for both declarations differ. If they do, it will flag it as an issue, as the visible label may be different than what someone would hear if they were navigating using a screen reader.

If you’re not using a testing service that includes this rule, it won’t be reported. The code will still “pass”, but it’s passing by omission, not because it’s actually accessible.
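
For illustration only, a homegrown version of that kind of rule could be sketched like this (this is the general idea, not Tenon.io’s implementation):

// Flag form fields whose visible <label> text and aria-label value differ.
document.querySelectorAll("input, select, textarea").forEach((field) => {
  const ariaLabel = (field.getAttribute("aria-label") || "").trim();
  const label = field.labels && field.labels[0];
  const visibleText = label ? label.textContent.trim() : "";

  // If both exist but disagree, what is seen and what is announced won't match.
  if (ariaLabel && visibleText && ariaLabel !== visibleText) {
    console.warn("Mismatched labels:", { visibleText, ariaLabel, field });
  }
});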

State

Some automated accessibility tests cannot parse the various states of interactive content. Critical parts of the user interface are effectively invisible to automation unless the test is run when the content is in an active, selected, or disabled state.

By interactive content, I mean things that the user has yet to take action on, or that aren’t present when the page loads. Unopened modals, collapsed accordions, hidden tab content, and carousel slides are all examples.

It takes sophisticated software to automatically test the various states of every component within a single screen, let alone across an entire web app or website. While it is possible to augment testing software with automated accessibility checks, it is very resource-intensive, usually requiring a dedicated team of engineers to set up and maintain.
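
To give a sense of what that augmentation involves, here is a rough sketch that audits a single component in two states. It assumes the open-source axe-core library is already loaded on the page, and the .accordion-trigger selector is hypothetical:

// Audit a component in both its collapsed and expanded states using axe-core.
async function auditAccordionStates() {
  const trigger = document.querySelector(".accordion-trigger"); // hypothetical selector

  // 1. Audit the collapsed (default) state.
  const collapsed = await axe.run(document);

  // 2. Expand the component, then audit again so the revealed content is checked too.
  trigger.click();
  const expanded = await axe.run(document);

  console.log("Collapsed-state violations:", collapsed.violations.length);
  console.log("Expanded-state violations:", expanded.violations.length);
}

auditAccordionStates();

Multiply that by every component, state, and screen, and the resource cost becomes clear.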

“Valid” Markup

Accessible Rich Internet Applications (ARIA) is a set of attributes that extend HTML to allow it to describe interaction in a way that can be better understood by assistive technologies. For example, the aria-expanded attribute can be toggled by JavaScript to programmatically communicate if a component is in an expanded (true) or collapsed (false) state. This is superior to toggling a CSS class like .is-expanded, where the update in state is only communicated visually.
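
A minimal sketch of that pattern might look like the following (the .disclosure-trigger and .disclosure-panel selectors are hypothetical):

// Hypothetical disclosure widget: a button that shows and hides a panel.
const trigger = document.querySelector(".disclosure-trigger");
const panel = document.querySelector(".disclosure-panel");

trigger.addEventListener("click", () => {
  const isExpanded = trigger.getAttribute("aria-expanded") === "true";

  // Communicate the new state programmatically, not just visually.
  trigger.setAttribute("aria-expanded", String(!isExpanded));
  panel.hidden = isExpanded;
});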

The presence of ARIA alone does not guarantee that something will automatically be accessible. Unfortunately, and in spite of its first rule of use, ARIA is commonly misunderstood, and consequently abused. A lot of off-the-shelf code has this problem, perpetuating the issue.

For example, certain ARIA attributes and values can only be applied to certain elements. If incorrectly applied, assistive technology will ignore or misreport the declaration. Certain roles, known as Abstract Roles, only exist to set up the overall taxonomy and should never be placed in markup.

<!-- Never do this -->
<button role="command">Save</button>

To further complicate the issue, support for ARIA is varied across browsers. While an attribute may be used appropriately, the browser may not communicate the declared role, property, or state to assistive technology.

There is also the scenario where ARIA can be applied to an element and be valid from a technical standpoint, yet be unusable from an assistive technology perspective. For example:

<h1 aria-hidden="true">
  Tired of unevenly cooked asparagus? Try this tip from the world’s oldest cookbook.
</h1>

This one Weird Trick.

The aria-hidden declaration removes the content from assistive technology, yet still allows it to be rendered visibly on the page. It’s a problematic pattern.

Headings — especially first-level headings — are vital in communicating the purpose of a page. If a person is using assistive technology to navigate, the aria-hidden declaration applied to the h1 element will make it difficult for them to quickly determine the page’s purpose. It will force them to navigate around the rest of the page to gain context, an annoying and labor-intensive process.

Some automated accessibility tests may scan the code and not report an error since the syntax itself is valid. The automation has no way of knowing the greater context of the declaration’s use.

This isn’t to say you should completely avoid using ARIA! When authored with care and deliberation, ARIA can fix the gaps in accessibility that sometimes plague complicated interactions; it provides some much-needed context to the people who rely on assistive technology.

Much-Needed Context

As the soggy car demonstrates, computers are awful at understanding the overall situation of the outside world. It’s up to us humans to be the ultimate arbiters in determining if what the computer spits out is useful or not.

Debunking

Before we discuss how to provide appropriate context, there are a few common misunderstandings about accessibility work that need to be addressed:

First, not all screen reader users are blind. In addition to all the points Adrian Roselli outlines in his post, some food for thought: the use of voice assistants is on the rise. When’s the last time you spoke to Siri or Alexa?

Second, accessibility is more than just screen readers. The rules outlined in the Web Content Accessibility Guidelines ensure that the largest number of people can read and operate technology, regardless of ability or circumstance.

For example, the rule that stipulates a website or web app needs to be able to work regardless of device orientation benefits everyone. Some people may need to mount their device in a fixed location in a specific orientation, such as in landscape mode on the arm of a wheelchair. Others might want to lie in bed and watch a movie, or better investigate a product photo (pinch and pull zooming will also be helpful to have here).

Third, disabilities can be conditional and can be brought about by your environment. It can be a short-term thing, like rain on your glasses, sleep deprivation, or an allergies-induced migraine. It can also be longer-term, such as a debilitating illness, broken limb, or a depressive episode. Multiple, compounding conditions can (and do) affect individuals.

That all being said, many accessibility fixes that help screen readers work properly also benefit other assistive technologies.

Get Your Feet Wet

Knowing where to begin can be overwhelming. Consider Michiel Bijl’s great advice:

“Before you release a website, tab through it. If you cannot see where you are on the page after each tab; you're not finished yet. #a11y”

Tab through a few of the main user flows on your website or web app to determine if all interactive components’ focus states are visually apparent, and if they can be activated via keyboard input. If there’s something you can click or tap on that isn’t getting highlighted when receiving keyboard focus, take note of it. Also pay attention to the order interactive components are highlighted when focused — it should match the reading order of the site.

An obvious focus state and logical tab order go a long way toward making your site accessible. These two features benefit a wide variety of assistive technology, including, but not limited to, screen readers.
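
If it helps to take notes as you go, a tiny, purely illustrative helper can log each element in the order it receives keyboard focus:

// Log every element that receives focus, in the order it is reached.
document.addEventListener("focusin", (event) => {
  const element = event.target;
  console.log(element.tagName.toLowerCase(), element.className || "(no class)");
});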

If you need a baseline to compare your testing to, Dave Rupert has an excellent project called A11Y Nutrition Cards, which outlines expected behavior for common interactive components. In addition, Scott O’Hara maintains a project called a11y Styled Form Controls. This project provides examples of components such as switches, checkboxes, and radio buttons that have well-tested and documented support for assistive technology. A clever reader might use one of these resources to help them try out the other!


A screenshot of the homepage for the a11y Styled Form Controls website placed over a screenshot of the Nutrition Cards for Accessible Components website.



The Fourth Myth

With that out of the way, I’m going to share a fourth myth with you: not every assistive technology user is a power user. Like with any other piece of software, there’s a learning curve involved.

In her post about Aaptiv’s redesign, Lisa Zhu discovers that their initial accessibility fix wasn’t intuitive. While their first implementation was “technically” correct, it didn’t line up with how people who rely on VoiceOver actually use their devices. A second solution simplified the interaction to better align with their expectations.

Don’t assume that just because something hypothetically functions that it’s actually usable. Trust your gut: if it feels especially awkward, cumbersome, or tedious to operate for you, chances are it’ll be for others.

Dive Right In

While not every accessibility issue is a screen reader issue, you should still get in the habit of testing your site with one. Not an emulator, simulator, or some other proxy solution.

If you find yourself struggling to operate a complicated interactive component using basic screen reader commands, it’s probably a sign that the component needs to be simplified. Chances are that the simplification will help non-assistive technology users as well. Good design benefits everyone!

The same goes for navigation. If it’s difficult to move around the website or web app, it’s probably a sign that you need to update your heading structure and landmark roles. Both of these features are used by assistive technology to quickly and efficiently navigate.


Two code examples for a sidebar. One uses a div element, while the other uses an aside element. Both have the class of sidebar applied to them, with a subheading of Recent Posts.


Both of these are sidebars, but only one of them is semantically described as such. A computer doesn’t know what a sidebar is, so it’s up to you to tell it.
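
As a rough sketch of the markup the figure describes (the heading level is an assumption, and the post links are omitted):

<!-- Looks like a sidebar, but is anonymous to assistive technology: -->
<div class="sidebar">
  <h2>Recent Posts</h2>
</div>

<!-- Exposed as a complementary landmark that can be jumped to directly: -->
<aside class="sidebar">
  <h2>Recent Posts</h2>
</aside>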

Another good thing to review is the text content used to describe your links. Hopping from link to link is another common assistive technology navigation technique; some screen readers can even generate a list of all link content on the page:

“Think before you link! Your "helpful" click here links look like this to a screen reader user. ALT = JAWS links list”


Neil Milliken

When navigating using a generated list of links, devoid of the surrounding non-link content, avoiding ambiguous terms like “click here” or “more info” can go a long way to ensuring a person can understand the overall meaning of the page. As a bonus, it’ll help alleviate cognitive concerns for everyone, as you are more accurately explaining what a user should expect after activating a link.
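
As a quick, made-up illustration (the URL and wording are hypothetical), compare how these two links would read when pulled into a links list:

<!-- Ambiguous once stripped of its surrounding sentence: -->
<a href="/reports/2018-annual-report.pdf">Click here</a> to download the report.

<!-- Understandable entirely on its own: -->
Download the <a href="/reports/2018-annual-report.pdf">2018 annual report (PDF)</a>.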

How To Test

Each screen reader has a different approach to how it announces content. This is intentional. It’s a balancing act between the product’s features, the operating system it is installed on, the form factor it is available in, and the types of input it can receive.

The Browser Wars taught us the folly of developing for only one browser. Similarly, we should not cater to a single screen reader. It is important to note that many people rely exclusively on a specific screen reader and browser combination, whether by circumstance, preference, or necessity, which makes this all the more important. However, there is a caveat: each screen reader works better when used with a specific browser, typically the one that allows it access to the greatest amount of accessibility API information.

All of these screen readers can be used for free, provided you have the hardware. You can also virtualize that hardware, either for free or on the cheap.

Automate

Automated accessibility tests should be your first line of defense. They will help you catch a great deal of nitpicky, easily-preventable errors before they get committed. Repeated errors may also signal problems in template logic, where one upstream tweak can fix multiple pages. Identifying and resolving these issues allows you to spend your valuable manual testing time much more wisely.

It may also be helpful to log accessibility issues in a place where people can collaborate, such as Google Sheets. Quantifying the frequency and severity of errors can lead to good things like updated documentation, opportunities for lunch and learn education, and other healthy changes to organizational workflow.

Much like manual testing with a variety of screen readers, it is recommended that you use a combination of automated tools to prevent gaps.

Windows

The two most popular screen readers on Windows are JAWS and NVDA.

JAWS

JAWS (Job Access With Speech) is the most popular and feature-rich screen reader on the market. It works best with Firefox and Chrome, with concessions for supporting Internet Explorer. Although it is paid software, it can be operated in full, in demo mode, for 40 minutes at a time (this should be more than sufficient to perform basic testing).

NVDA

NVDA (NonVisual Desktop Access) is free, although a donation is strongly encouraged. It is a feature-rich alternative to JAWS. It works best with Firefox.

Narrator

Windows comes bundled with a built-in screen reader called Narrator. It works well with Edge, but has difficulty interfacing with other browsers.

Apple

macOS

VoiceOver is a powerful screen reader that comes bundled with macOS. Use it in conjunction with Safari, first making sure that full keyboard access is enabled.

iOS

VoiceOver is also included in iOS, and is the most popular mobile screen reader. Much like its desktop counterpart, it works best with Safari. An interesting note here is that according to the 2017 WebAIM screen reader survey, a not-insignificant number of respondents augment their phone with an external hardware keyboard.

Android

Google recently folded TalkBack, their mobile screen reader, into a larger collection of accessibility services called the Android Accessibility Suite. It works best with Mobile Chrome. While many Android apps are notoriously inaccessible, it is still worth testing on this platform. Android’s growing presence in emerging markets, as well as increasing internet use amongst elderly and lower-income demographics, should give pause for consideration.

Popular screen readers:

  • JAWS (Windows). Preferred browsers: Chrome, Firefox. Manual: JAWS 2018 Documentation. Launch: start JAWS as you would any other Windows application. Quit: Insert + F4.
  • NVDA (Windows). Preferred browser: Firefox. Manual: NVDA 2018.2.1 User Guide. Launch: Ctrl + Alt + N. Quit: Insert + Q.
  • Narrator (Windows). Preferred browser: Edge. Manual: Get started with Narrator. Launch: Windows key + Control + Enter. Quit: Windows key + Control + Enter.
  • VoiceOver (macOS). Preferred browser: Safari. Manual: VoiceOver Getting Started Guide. Launch: Command + F5, or tap the Touch ID button 3 times. Quit: Command + F5, or tap the Touch ID button 3 times.
  • VoiceOver (iOS). Preferred browser: Mobile Safari. Manual: VoiceOver overview – iPhone User Guide. Launch: tell Siri to “Turn on VoiceOver.” or activate it in Settings. Quit: tell Siri to “Turn off VoiceOver.” or deactivate it in Settings.
  • Android Accessibility Suite (Android). Preferred browser: Mobile Chrome. Manual: Get started on Android with TalkBack. Launch: press both volume keys for 3 seconds. Quit: press both volume keys for 3 seconds.

Call The Professionals

If you do not require the use of assistive technology on a frequent basis, then you do not fully understand how the people who do rely on it interact with the web.

Much like traditional user testing, being too close to the thing you created may cloud your judgment. Empathy exercises are a good way to become aware of the problem space, but you should not use yourself as a litmus test for whether the entire experience is truly accessible. You are not the expert.

If your product serves a huge population of users, if its core base of users trends towards having a higher probability of disability conditions (specialized product, elderly populations, foreign language speakers, etc.), and/or if it is required to be compliant by law, I would strongly encourage allocating a portion of your budget for testing by people with disabilities.

“At what point does your organisation stop supporting a browser in terms of % usage? 18% of the global pop. have an #Accessibility requirement, 2% people have a colour vision deficient. But you consider 2% IE usage support more important? Support everyone be inclusive.”

Mark Wilcock

This isn’t to say you should completely delegate the responsibility to these testers. Much as automated accessibility testing can catch the smaller issues ahead of time, a first round of basic manual testing helps professional testers focus their efforts on the complicated interactions you need an expert’s opinion on. In addition to optimizing the value of their time, it helps to get you more comfortable triaging. It is also a professional courtesy, plain and simple.

There are a few companies that perform manual testing by people with disabilities:

Designed Experiences

We also need to acknowledge the other large barrier to accessible sites that can’t be automated away: poor user experience.

User experience can make or break a product. Your code can compile perfectly, your time to first paint can be lightning quick, and your Webpack setup can be beyond reproach. All this is irrelevant if the end result is unusable. User experience encompasses all users, including those who navigate with the aid of assistive technology.

If a person cannot operate your website or web app, they’ll abandon it and not think twice. If they are forced to use your site to get a service unavailable by other means, there’s a growing precedent for taking legal action (and rightly so).

As a discipline, user experience can be roughly divided into two parts: how something looks and how it behaves. They’re intrinsically interlinked concepts — work on either may affect both. While accessible design is a topic unto itself, there are some big-picture things we can keep in mind when approaching accessible user experiences from a testing perspective:

How It Looks

The WCAG does a great job covering a lot of the basics of good design. Color contrast, font size, user-facing state: a lot of these things can be targeted by automation. What you should pay attention to is all the atomic, difficult to quantify bits that compound to create your designs. Things like the words you choose, the fonts you use to display them, the spacing between things, affordances for interaction, the way you handle your breakpoints, etc.
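
For context on the automatable end of that list, the sketch below shows roughly the WCAG 2.x contrast-ratio math that checkers apply, with colors given as 0–255 RGB arrays:

// Relative luminance and contrast ratio, per the WCAG 2.x definitions.
function relativeLuminance([r, g, b]) {
  const channel = (value) => {
    const c = value / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(foreground, background) {
  const [lighter, darker] = [relativeLuminance(foreground), relativeLuminance(background)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// 4.5:1 is the WCAG AA threshold for normal-size body text.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2)); // ≈ 4.54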

“A good font should tell you:
the difference between m and rn
the difference between I and l
the difference between O and 0.”

mallory, alice & bob

It’s one of those “an ounce of prevention is worth a pound of cure” situations. Smart, accessible defaults can save countless time and money down the line. Lean and mean startups all the way up to multinational conglomerates value efficient use of resources, and this is one of those places where you can really capitalize on that. Put your basic design patterns — say collected in something like a mood board or living style guide — in front of people early and often to see if your designed intent is clear.

How It Behaves

An enticing color palette and collection of thoughtfully-curated stock photography only go so far. Eventually, you’re going to have to synthesize all your design decisions to create something that addresses a need.

Behavior can be as small as a microinteraction, or as large as finding a product and purchasing it. What’s important here is to make sure that all the barriers to a person trying to accomplish the task at hand are removed.

If you’re using personas, don’t create a separate persona for a user with a disability. Instead, blend accessibility considerations into your existing ones. As a persona is an abstracted representation of the types of users you want to cater to, you want to make sure the kinds of conditions they may be experiencing are included. Disability conditions aren’t limited to just physical impairments, either. Things like a metered data plan, non-native language, or anxiety are all worth integrating.

“When looking at your site's analytics, remember that if you don't see many users on lower end phones or from more remote areas, it's not because they aren't a target for your product or service. It is because your mobile experience sucks.
As a developer, it's your job to fix it.”

Estelle Weyl

User testing, ideally simulating conditions as close to what a person would be doing in the real world (including their individual device preferences and presence of assistive technology), is also key. Verifying that people are actually able to make the logical leaps necessary to operate your interface addresses a lot of cognitive concerns, a difficult-to-quantify yet vital thing to accommodate.

We Shape Our Tools, Our Tools Shape Us

Our tool use corresponds to the kind of work we do: Carpenters drive nails with hammers, chefs cook using skillets, surgeons cut with scalpels. It’s a self-reinforcing phenomenon, and it tends to lead to over-categorization.

Sometimes this over-categorization gets in the way of us remembering to consider the real world. A surgeon might have a carpentry hobby; a chef might be a retired veterinarian. It’s important to understand that accessibility is everyone’s responsibility, and there are many paths to making our websites and web apps the best they can be for everyone. To paraphrase Mikey Ilagan, accessibility is a holistic practice, essential to some but useful to all.

Used with discretion, ARIA is a very good tool to have at our disposal. We shouldn’t shy away from using it, provided we understand how and why it works.

The same goes for automated accessibility tests, as well as GPS apps. They’re great tools to have, just get to know the terrain a little bit first.

Resources

Automated Accessibility Tools

Professional Services

References

Quick Tests

Further Reading



Read the article: 

The Importance Of Manual Accessibility Testing

The Rise Of Intelligent Conversational UI

Burke Holland



For a long time, we’ve thought of interfaces strictly in a visual sense: buttons, dropdown lists, sliders, carousels (please no more carousels). But now we are staring into a future composed not just of visual interfaces, but of conversational ones as well. Microsoft alone reports that three thousand new bots are built every week on their bot framework. Every. Week.

The importance of Conversational UI cannot be overstated, even if some of us wish it wasn’t happening.

The most important advancement in Conversational UI has been Natural Language Processing (NLP). This is the field of computing that deals not with deciphering the exact words that a user said, but with parsing their actual intent out of them. If the bot is the interface, NLP is the brain. In this article, we’re going to take a look at why NLP is so important, and how you (yes, you!) can build your own.

Speech Recognition vs. NLP

Most people will be familiar with Amazon Echo, Cortana, Siri or Google Home, all of which have an interface that is primarily conversational. They are also all using NLP.





Aside from these intelligent assistants, most Conversational UIs have nothing to do with voice at all. They are text driven. These are the bots we chat with in Slack, Facebook Messenger or over SMS. They deliver high quality gifs in our chats, watch our build processes and even manage our pull requests.





Conversational UIs built on text are nice because there is no speech recognition component. The text is already parsed.

When it comes to a verbal interaction, the fundamental problem is not recognizing the speech. We’ve mostly got that one down.

OK, so maybe it’s not perfect. I still get voicemails every day like a game of Mad Libs that I never asked to play. iOS just sticks a blank line in whenever it doesn’t know exactly what was said.





Google, on the other hand, just tries to guess. Like this one from my father. I have absolutely no idea what this message is actually trying to say other than “Be Safe” which honestly sounds like my mom, and not my dad. I have a hard time believing he ever said that. I don’t trust the computer.





I’m picking on voice mail transcriptions here, which might be the hardest speech recognition to do given how degraded the audio quality is.

Nevertheless, speech recognition is largely a solved problem. It’s even built right into Chrome and it works remarkably well.





After we solved the problem of speech recognition, we started to use it everywhere. That was unfortunate because speech recognition on its own doesn’t do us a whole lot of good. Interfaces that rely solely on speech recognition require the user to state things in a precise way, and they can only state the limited number of exact words or phrases that the interface knows about. This is not natural. This is not how a conversation works.

Without NLP, Conversational UI can be a true nightmare.

Conversational UI Without NLP

We’re probably all familiar with automated phone menus. These are known as Interactive Voice Response systems — or IVRs for short. They are designed to take the place of the traditional operator and automatically transfer callers to the right place without having to talk to a human. On the surface, this seems like a good idea. In practice, it’s mostly just you waiting while a recorded voice reads out a list of menu items that “may have changed.”





A 2011 study from New York University found that 83% of people feel IVR systems “provide either no benefit at all, or only a cost savings benefit to the company.” They also noted that IVR systems “score lower than any other service option.” People would literally rather do anything else than use an automated phone menu.

NLP has changed the IVR market rather significantly in the past few years. NLP can pick a user’s intent out of anything they say, so it’s better to just let them say it and then determine if you support the action.

Check out how AT&T does it.

AT&T has a truly intelligent Conversational UI. It uses NLP to let me just state my intent. Also, notice that I don’t have to know what to say. I can fumble all around and it still picks out my intent.

AT&T also uses information that it already has (my phone number) and then leverages text messaging to send me a link to a traditional visual UI, which is probably a much better UX for making a payment. NLP drives the whole experience here. Without it, the rest of the interaction would not be nearly as smooth.

NLP is powerful, but more importantly, it is also accessible to developers everywhere. You don’t have to know a thing about Machine Learning (ML) or Artificial Intelligence (AI) to use it. All you need to know how to do is make an AJAX call. Even I can do that!

Building An NLP Interface

So much of Machine Learning still remains inaccessible to developers. Even the best YouTube videos on the subject quickly become hard to follow with subjects like Neural Networks and Gradient Descent. We have, however, made significant progress in the field of Language Processing, to the point that it’s accessible to developers of nearly any skill level.

Natural Language Processing differs based on the service, but the overall idea is that the user has an intent, and that intent contains entities. That means exactly nothing to you at the moment, so let’s work up a hypothetical Home Automation bot and see how this works.

The Home Automation Example

In the field of Natural Language Processing, the canonical “Hello World” is usually a Home Automation demo. This is because it helps to clearly demonstrate the fundamental concepts of NLP without overloading your brain.

A Home Automation Bot is a service that can control hypothetical lights in a hypothetical house. For instance, we might want to say “Turn on the kitchen lights”. That is our intent. If we said “Hello”, we are clearly expressing a different intent. Inside of that intent, there are two pieces of information that we need to complete the action:

  1. The ‘Location’ of the light (kitchen)
  2. The desired state of the lights ‘Power’ (on/off)

These (Location, Power) are known as entities.

When we are finished designing our NLP interface, we are going to be able to call an HTTP endpoint and pass it our intent: “Turn on the kitchen lights.” That endpoint will return to us the intent (Control Lights) and two objects representing our entities: Location and Power. We can then pass those into a function which actually controls our lights…

function controlLights(location, power) {
  console.log(`Turning ${power} the ${location} lights`);

  // TODO: Call an imaginary endpoint which controls lights
}

There are a lot of NLP services out there that are available today for developers. For this example, I’m going to show the LUIS project from Microsoft because it is free to use.

LUIS is a completely visual tool, so we won’t actually be writing any code at all. We’ve already talked about Intents and Entities, so you already know most of the terminology that you need to know to build this interface.

The first step is to create a “Control Lights” intent in LUIS.





Before I do anything with that intent, I need to define my Location and Power entities. Entities can be different types — kind of like types in a programming language. You can have dates, lists and even entities that are related to other entities. In this case, Power is a list of values (on, off) and Location is a simple entity, which can be any value.

It will be up to LUIS to be smart enough to figure out exactly what the Location is.





Now we can begin to train this model to understand all of the different ways that we might ask it to control the lights in a different location. Let’s think of all the different ways that we could do that:

  • Turn off the kitchen lights;
  • Turn off the lights in the office;
  • The lights in the living room, turn them on;
  • Lights, kitchen, off;
  • Turn off the lights (no location).

As I feed these into the Control Lights intent as utterances, LUIS tries to determine where in the intent the entities are. You can see that because Power is a discrete list of values, it gets that right every time.





But it has no idea what a Location even is. LUIS wants us to go through this list and tell it where the Location is. That’s done by clicking on a word or group of words and assigning it to the right entity. As we are doing this, we are really creating a machine learning model that LUIS is going to use to statistically estimate what qualifies as a Location.





When I’m done telling LUIS where in these utterances all the locations are, my dashboard looks like this…





Now we train the model by clicking on the “Train” button at the top. Do you feel like a data scientist yet?

Now I can test it using the test panel. You can see that LUIS is already pretty smart. The Power is easy to pick out, but it can actually pick out Locations it has never seen before. It’s doing what your brain does — using the information that it has to make an educated guess. Machine Learning is equal parts impressive and scary.





If we try hard enough, we can fool the AI. The more utterances we give it and label, the smarter it will get. I added 35 utterances to mine before I was done and it is close to bullet proof.

So now we get to the important part, which is how we actually use this NLP in an app. LUIS has a “Publish” menu option which allows us to publish our model to the internet where it’s exposed via a single HTTP endpoint. It will look something like this…

https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/c4396135-ee3f-40a9-8b83-4704cddabf7a?subscription-key=19d29a12d3fc4d9084146b466638e62a&verbose=true&timezoneOffset=0&q=

The very last part of that query string is a q= variable. This is where we would pass our intent.

https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/c4396135-ee3f-40a9-8b83-4704cddabf7a?subscription-key=19d29a12d3fc4d9084146b466638e62a&verbose=true&timezoneOffset=0&q=turn on the kitchen lights

The response that we get back is just a JSON object.


  "query": "turn on the kitchen lights",
  "topScoringIntent": 
    "intent": "Control Lights",
    "score": 0.999999046
  ,
  "intents": [
    
      "intent": "Control Lights",
      "score": 0.999999046
    ,
    
      "intent": "None",
      "score": 0.0532306843
    
  ],
  "entities": [
    
      "entity": "kitchen",
      "type": "Location",
      "startIndex": 12,
      "endIndex": 18,
      "score": 0.9516622
    ,
    
      "entity": "on",
      "type": "Power",
      "startIndex": 5,
      "endIndex": 6,
      "resolution": 
        "values": [
          "on"
        ]
      
    }
  ]
}

Now this is something that we can work with as developers! This is how you add NLP to any project — with a single REST endpoint. Now you’re free to create a bot with some real brains!
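
As a rough sketch (not part of the LUIS walkthrough itself), wiring that endpoint to the controlLights() function from earlier could look like this; the app ID and key placeholders stand in for the published URL shown above:

// LUIS_ENDPOINT is the published URL from the "Publish" step, up to and including "q=".
const LUIS_ENDPOINT =
  "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>?subscription-key=<key>&verbose=true&timezoneOffset=0&q=";

async function handleUtterance(utterance) {
  const response = await fetch(LUIS_ENDPOINT + encodeURIComponent(utterance));
  const result = await response.json();

  if (result.topScoringIntent.intent === "Control Lights") {
    // Pull the two entities out of the JSON shown above.
    const location = result.entities.find((entity) => entity.type === "Location");
    const power = result.entities.find((entity) => entity.type === "Power");

    controlLights(location && location.entity, power && power.entity);
  }
}

handleUtterance("turn on the kitchen lights");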

Brian Holt used the browser speech API and a LUIS model to create a voice powered calculator that is running right inside of CodePen. Chrome is required for the speech API.

See the Pen Voice Calculator by Brian Holt (@btholt) on CodePen.

Bot Design Is Still Hard

Having a smart bot is only half the battle. We still need to account for all of the actions that our system might expose, and that can lead to a lot of different logical paths, which makes for messy code.

Conversations also happen in stages, so the bot needs to be able to intelligently direct users down the right path without frustrating them or being unable to recover when something goes wrong. It needs to be able to recover when the conversation dies midstream and then starts again. That’s a whole other article and I’ve included some resources below to help.

When it comes to language understanding, the AI platforms are mature and ready to use today. While that won’t help you perfectly design your bot, it will be a key component to building a bot that people don’t hate.

Great UI Is Just Great UI

A final note: As we saw from the AT&T example, a truly smart interface combines great speech recognition, Natural Language Processing, different types of conversational UI (speech and text) and even a visual UI. In short, great UI is just that — great UI — and it is not a zero sum game. Great UIs will leverage all of the technology available to provide the best possible user experience.

Special thanks to Mat Velloso for his input on this article.

Further Resources:



Visit site: 

The Rise Of Intelligent Conversational UI

Learn from the Best: an Interview with Content Marketing Rock Star Andy Crestodina

I first had the pleasure of working with Andy on a Kissmetrics blog post five years ago. A few months after his post was published, I looked at our traffic in Google Analytics and said: Whatever Andy touches becomes magic. The electric sparks that shoot off his fingertips as he types turn into thousands of social shares, tens of thousands of pageviews, and more importantly – unbelievable wisdom that his readers consume. Let’s get inside his head for a moment and learn a few new things! 1. In the current state of inbound marketing, are people getting suffocated by…

The post Learn from the Best: an Interview with Content Marketing Rock Star Andy Crestodina appeared first on The Daily Egg.

From:  

Learn from the Best: an Interview with Content Marketing Rock Star Andy Crestodina

How To Write Content Without…Well…Having To Write It


Early morning, dew still fresh on the grass, sunrise beautifully lighting a home office, aromatic steam rising from hot coffee, fog melting away from the world outside. And — unfortunately — the first flashes of carpal tunnel pain go pulsing through your wrist. Indeed, the consequences of a repeated routine can manifest themselves in painful ways. Furthermore, doing something in only one way limits personal growth. My suggestion? Maybe it’s time to stop exclusively typing all your content and start dictating some of it. Dictation is a vastly underused feature across all PC and smartphone platforms. Lawyers have been known…

The post How To Write Content Without…Well…Having To Write It appeared first on The Daily Egg.

View original article:  

How To Write Content Without…Well…Having To Write It

Be A Better Designer By Eating An Elephant

I can’t imagine any other industry in which so much change happens so quickly. If you stop paying attention for a week, it can feel like you’ve not been listening for a year. There’s so much to learn.
Falling behind is easy, too. We might be in the middle of a major project, so we put off learning about this newfangled thing called Sass or Node.js or even quickly experimenting with the new Bootstrap or Foundation that everyone is raving about.

Read more:

Be A Better Designer By Eating An Elephant

Taming The Email Beast

In the 1950s, when consumer electronics such as vacuum cleaners and washing machines emerged, there was a belief that household chores would be done in a fraction of the time.
We know now it didn’t work out that way. Our definition of clean changed. Instead of wearing underwear for multiple days, we started using a fresh pair every day, and so the amount of washing required increased. In short, technology enabled us to do more, not less.

Read this article: 

Taming The Email Beast

5 Ways To Prevent Bad Microcopy

You’ve just created the best user experience ever. You had the idea. You sketched it out. You started to build it. Except you’re already in trouble, because you’ve forgotten something: the copy. Specifically, the microcopy.
Microcopy is the text we don’t talk about very often. It’s the label on a form field, a tiny piece of instructional text, or the words on a button. It’s the little text that can make or break your user experience.

Follow this link – 

5 Ways To Prevent Bad Microcopy

How To Add A Personal Touch To Your Web Design

The Web is technical by nature. Different scripts and pieces of code are linked together through hyperlinks, forming an endless net of interwoven, encrypted information — data that is accessible only through technical interfaces, such as Web browsers, or applications. Yet, Web professionals have made it their calling to tame the “wild” Web and turn it into an accessible, user-friendly and, most of all, personal medium. Personality will set your brand apart from competitors and help you connect with a passionate audience.

Visit site:  

How To Add A Personal Touch To Your Web Design

Guidelines For Designing With Audio

As we’ve seen, audio is used as a feedback mechanism when users interact with many of their everyday devices, such as mobile phones, cars, toys and robots. There are many subtleties to designing with audio in order to create useful, non-intrusive experiences. Here, we’ll explore some guidelines and principles to consider when designing with audio.
While I won’t cover this here, audio is a powerful tool for designing experiences for accessibility, and many of the guidelines discussed here apply.

Visit site: 

Guidelines For Designing With Audio

When Typography Speaks Louder Than Words

Clever graphic designers love to use typography to explore the interaction between the look of type and what type actually says. In communicating a message, a balance has to be achieved between the visual and the verbal aspects of a design.
Sometimes, however, designers explore the visual aspect of type to a much greater extent than the verbal. In these cases, the visual language does all the talking. This article explores when the visual elements of typography speak louder than words.

Visit site:  

When Typography Speaks Louder Than Words