Designing For Accessibility And Inclusion

Steven Lambert



“Accessibility is solved at the design stage.” This is a phrase that Daniel Na and his team heard over and over again while attending a conference. To design for accessibility means to be inclusive to the needs of your users. This includes your target users, users outside of your target demographic, users with disabilities, and even users from different cultures and countries. Understanding those needs is the key to crafting better and more accessible experiences for them.

One of the most common problems when designing for accessibility is knowing what needs you should design for. It’s not that we intentionally design to exclude users, it’s just that “we don’t know what we don’t know.” So, when it comes to accessibility, there’s a lot to know.

How do we go about understanding the myriad of users and their needs? How can we ensure that their needs are met in our design? To answer these questions, I have found that it is helpful to apply a critical analysis technique of viewing a design through different lenses.

“Good [accessible] design happens when you view your [design] from many different perspectives, or lenses.”

The Art of Game Design: A Book of Lenses

A lens is “a narrowed filter through which a topic can be considered or examined.” Often used to examine works of art, literature, or film, lenses ask us to leave behind our worldview and instead view the world through a different context.

For example, viewing art through a lens of history asks us to understand the “social, political, economic, cultural, and/or intellectual climate of the time.” This allows us to better understand what world influences affected the artist and how that shaped the artwork and its message.

Accessibility lenses are a filter that we can use to understand how different aspects of the design affect the needs of the users. Each lens presents a set of questions to ask yourself throughout the design process. By using these lenses, you will become more inclusive to the needs of your users, allowing you to design a more accessible user experience for all.

The Lenses of Accessibility are:

  • Lens of Animation and Effects
  • Lens of Audio and Video
  • Lens of Color
  • Lens of Controls
  • Lens of Font
  • Lens of Images and Icons
  • Lens of Keyboard
  • Lens of Layout
  • Lens of Material Honesty
  • Lens of Readability
  • Lens of Structure
  • Lens of Time

You should know that not every lens will apply to every design. While some can apply to every design, others are more situational. What works best in one design may not work for another.

The questions provided by each lens are merely a tool to help you understand what problems may arise. As always, you should test your design with users to ensure it’s usable and accessible to them.

Lens Of Animation And Effects

Effective animations can help bring a page and brand to life, guide the user’s focus, and help orient the user. But animations are a double-edged sword: misusing them can cause confusion or distraction, and some effects can even be potentially deadly for some users.

Fast flashing effects (defined as flashing more than three times a second) or high-intensity effects and patterns can trigger seizures in users with photosensitive epilepsy. Photosensitivity can also cause headaches, nausea, and dizziness. Users with photosensitive epilepsy have to be very careful when using the web, as they never know when something might trigger a seizure.

Other effects, such as parallax or motion effects, can cause some users to feel dizzy or experience vertigo due to vestibular sensitivity. The vestibular system controls a person’s balance and sense of motion. When this system doesn’t function as it should, it causes dizziness and nausea.

“Imagine a world where your internal gyroscope is not working properly. Very similar to being intoxicated, things seem to move of their own accord, your feet never quite seem to be stable underneath you, and your senses are moving faster or slower than your body.”

A Primer To Vestibular Disorders

Constant animations or motion can also be distracting to users, especially to users who have difficulty concentrating. GIFs are notably problematic as our eyes are drawn towards movement, making it easy to be distracted by anything that updates or moves constantly.

This isn’t to say that animation is bad and you shouldn’t use it. Instead, you should understand why you’re using an animation and how to design safer ones. Generally speaking, try to design animations that cover small distances, match the direction and speed of other moving objects (including scroll), and are small relative to the screen size.

You should also provide controls or options to tailor the experience to the user. For example, Slack lets you hide animated images or emojis, both as a global setting and on a per-image basis.
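On the web, one way to honor such a preference automatically is the prefers-reduced-motion media query, which reflects the user’s operating-system motion setting. Here is a minimal sketch of reading it from JavaScript; the .animated class is a hypothetical hook, not something prescribed by this lens:

var reduceMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function applyMotionPreference() {
  document.querySelectorAll('.animated').forEach(function(el) {
    // Pausing (rather than removing) keeps the animation available
    // to users who have not asked for reduced motion.
    el.style.animationPlayState = reduceMotion.matches ? 'paused' : 'running';
  });
}

applyMotionPreference();
reduceMotion.addEventListener('change', applyMotionPreference);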

To use the Lens of Animation and Effects, ask yourself these questions:

  • Are there any effects that could cause a seizure?
  • Are there any animations or effects that could cause dizziness or vertigo through use of motion?
  • Are there any animations that could be distracting by constantly moving, blinking, or auto-updating?
  • Is it possible to provide controls or options to stop, pause, hide, or change the frequency of any animations or effects?

Lens Of Audio And Video

Autoplaying videos and audio can be pretty annoying. Not only do they break a user’s concentration, but they also force the user to hunt down the offending media and mute or stop it. As a general rule, don’t autoplay media.

“Use autoplay sparingly. Autoplay can be a powerful engagement tool, but it can also annoy users if undesired sound is played or they perceive unnecessary resource usage (e.g. data, battery) as the result of unwanted video playback.”

Google Autoplay guidelines

You’re now probably asking, “But what if I autoplay the video in the background but keep it muted?” While using videos as backgrounds may be a growing trend in today’s web design, background videos suffer from the same problems as GIFs and constant moving animations: they can be distracting. As such, you should provide controls or options to pause or disable the video.

Along with controls, videos should have transcripts and/or subtitles so users can consume the content in a way that works best for them. Users who are visually impaired or who would rather read instead of watch the video need a transcript, while users who aren’t able to or don’t want to listen to the video need subtitles.

To use the Lens of Audio and Video, ask yourself these questions:

  • Are there any audio or video that could be annoying by autoplaying?
  • Is it possible to provide controls to stop, pause, or hide any audio or videos that autoplay?
  • Do videos have transcripts and/or subtitles?

Lens Of Color

Color plays an important part in a design. Colors evoke emotions, feelings, and ideas. Colors can also help strengthen a brand’s message and perception. Yet the power of colors is lost when a user can’t see them or perceives them differently.

Color blindness affects roughly 1 in 12 men and 1 in 200 women. Deuteranopia (red-green color blindness) is the most common form of color blindness, affecting about 6% of men. Users with red-green color blindness typically perceive reds, greens, and oranges as yellowish.


Deuteranopia (green color blindness) is common and causes reds to appear brown/yellow and greens to appear beige. Protanopia (red color blindness) is rare and causes reds to appear dark/black and orange/greens to appear yellow. Tritanopia (blue-yellow color blindness) is very rare and causes blues to appear more green/teal and yellows to appear violet/grey.

Color meaning is also problematic for international users. Colors mean different things in different countries and cultures. In Western cultures, red is typically used to represent negative trends and green positive trends, but the opposite is true in Eastern and Asian cultures.

Because colors and their meanings can be lost either through cultural differences or color blindness, you should always add a non-color identifier. Identifiers such as icons or text descriptions can help bridge cultural differences while patterns work well to distinguish between colors.


Trello’s color blind friendly labels use different patterns to distinguish between the colors.

Oversaturated colors, high-contrast colors, and even just the color yellow can be uncomfortable and unsettling for some users, particularly those on the autism spectrum. It’s best to avoid high concentrations of these types of colors to help users remain comfortable.

Poor contrast between foreground and background colors makes content harder to see for users with low vision, users on low-end monitors, or users who are simply in direct sunlight. All text, icons, and any focus indicators used by keyboard users should meet a minimum contrast ratio of 4.5:1 against the background color.

You should also ensure your design and colors work well in different settings of Windows High Contrast mode. A common pitfall is that text becomes invisible on certain high contrast mode backgrounds.
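Because the 4.5:1 figure comes from a precise formula in WCAG 2.0, candidate color pairs can be checked during design. Below is a sketch of that calculation; the function names are mine, and colors are plain [r, g, b] arrays with 0–255 channels:

// Relative luminance per the WCAG 2.0 definition.
function luminance(rgb) {
  var channels = rgb.map(function(value) {
    var c = value / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

// Contrast ratio of two colors, from 1 (identical) to 21 (black on white).
function contrastRatio(foreground, background) {
  var lighter = Math.max(luminance(foreground), luminance(background));
  var darker = Math.min(luminance(foreground), luminance(background));
  return (lighter + 0.05) / (darker + 0.05);
}

contrastRatio([0, 0, 0], [255, 255, 255]);       // 21, maximum contrast
contrastRatio([119, 119, 119], [255, 255, 255]); // ~4.48, just under 4.5:1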

To use the Lens of Color, ask yourself these questions:

  • If the color was removed from the design, what meaning would be lost?
  • How could I provide meaning without using color?
  • Are any colors oversaturated or so high in contrast that they could cause users to become overstimulated or uncomfortable?
  • Do the foreground and background colors of all text, icons, and focus indicators meet the contrast ratio guideline of 4.5:1?

Lens Of Controls

Controls, also called ‘interactive content,’ are any UI elements that the user can interact with, be they buttons, links, inputs, or any HTML element with an event listener. Controls that are too small or too close together can cause lots of problems for users.

Small controls are hard to click on for users who are unable to be accurate with a pointer, such as those with tremors, or those who suffer from reduced dexterity due to age. The default size of checkboxes and radio buttons, for example, can pose problems for older users. Even when a label is provided that could be clicked on instead, not all users know they can do so.

Controls that are too close together can cause problems for touch screen users. Fingers are big and difficult to be precise with. Accidentally touching the wrong control can cause frustration, especially if that control navigates you away or makes you lose your context.


Tweet that says Software being Done is like lawn being Mowed. Jim Benson


When touching a single line tweet, it’s very easy to accidentally click the person’s name or handle instead of opening the tweet because there’s not enough space between them. (Source) (Large preview)

Controls that are nested inside another control can also contribute to touch errors. Not only is it not allowed in the HTML spec, it also makes it easy to accidentally select the parent control instead of the one you wanted.

To give users enough room to accurately select a control, the recommended minimum size for a control is 34 by 34 device-independent pixels; Google recommends at least 48 by 48 pixels, while the WCAG spec recommends at least 44 by 44 pixels. This size includes any padding the control has, so a control that is visually 24 by 24 pixels with an additional 10 pixels of padding on all sides would measure 44 by 44 pixels.

It’s also recommended that controls be placed far enough apart to reduce touch errors. Microsoft recommends at least 8 pixels of spacing while Google recommends controls be spaced at least 32 pixels apart.
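These size numbers are easy to spot-check in the browser. The sketch below flags obviously undersized controls; it is a rough audit, not a complete one, since anything with an event listener can be a control:

// A rough audit: warn about interactive elements rendered smaller than 44px.
var MIN_SIZE = 44;

document.querySelectorAll('a, button, input, select, textarea').forEach(function(control) {
  // getBoundingClientRect() includes padding and borders, matching the
  // recommendation that padding counts toward the control's size.
  var rect = control.getBoundingClientRect();
  if (rect.width < MIN_SIZE || rect.height < MIN_SIZE) {
    console.warn('Control below ' + MIN_SIZE + 'px:', control);
  }
});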

Controls should also have a visible text label. Not only do screen readers require the text label to know what the control does, but it’s been shown that text labels help all users better understand a control’s purpose. This is especially important for form inputs and icons.

To use the Lens of Controls, ask yourself these questions:

  • Are any controls not large enough for someone to touch?
  • Are any controls so close together that it would be easy to touch the wrong one?
  • Are there any controls inside another control or clickable region?
  • Do all controls have a visible text label?

Lens Of Font

In the early days of the web, we designed web pages with a font size between 9 and 14 pixels. This worked out just fine back then as monitors had a relatively known screen size. We designed thinking that the browser window was a constant, something that couldn’t be changed.

Technology today is very different than it was 20 years ago. Today, browsers can be used on any device of any size, from a small watch to a huge 4K screen. We can no longer use fixed font sizes to design our sites. Font sizes must be as responsive as the design itself.

Not only should the font sizes be responsive, but the design should be flexible enough to allow users to customize the font size, line height, or letter spacing to a comfortable reading level. Many users make use of custom CSS that helps them have a better reading experience.

The font itself should be easy to read. You may be wondering if one font is more readable than another. The truth of the matter is that the font itself doesn’t really make a difference to readability. Instead, it’s the font style that plays an important role in a font’s readability.

Decorative or cursive font styles are harder to read for many users, but especially problematic for users with dyslexia. Small font sizes, italicized text, and all uppercase text are also difficult for users. Overall, larger text, shorter line lengths, taller line heights, and increased letter spacing can help all users have a better reading experience.

To use the Lens of Font, ask yourself these questions:

  • Is the design flexible enough that the font could be modified to a comfortable reading level by the user?
  • Is the font style easy to read?

Lens Of Images And Icons

They say, “A picture is worth a thousand words.” Still, a picture you can’t see is speechless, right?

Images can be used in a design to convey a specific meaning or feeling. Other times they can be used to simplify complex ideas. Whichever the case for the image, a user who uses a screen reader needs to be told what the meaning of the image is.

As the designer, you understand best the meaning or information the image conveys. As such, you should annotate the design with this information so it’s not left out or misinterpreted later. This will be used to create the alt text for the image.

How you describe an image depends entirely on context, or how much textual information is already available that describes the information. It also depends on if the image is just for decoration, conveys meaning, or contains text.

“You almost never describe what the picture looks like, instead you explain the information the picture contains.”

Five Golden Rules for Compliant Alt Text

Since knowing how to describe an image can be difficult, there’s a handy decision tree to help when deciding. Generally speaking, if the image is decorative or there’s surrounding text that already describes the image’s information, no further information is needed. Otherwise, you should describe the information of the image. If the image contains text, repeat that text in the description as well.

Descriptions should be succinct. It’s recommended to use no more than two sentences, but aim for one concise sentence when possible. This allows users to quickly understand the image without having to listen to a lengthy description.

As an example, if you were to describe this image for a screen reader, what would you say?


Vincent van Gogh’s The Starry Night

Since we describe the information of the image and not the image itself, the description could be “Vincent van Gogh’s The Starry Night” since there is no other surrounding context that describes it. What you shouldn’t include is a description of the painting’s style or what the picture looks like.

If the information of the image would require a lengthy description, such as a complex chart, you shouldn’t put that description in the alt text. Instead, you should still use a short description for the alt text and then provide the long description as either a caption or link to a different page.

This way, users can still get the most important information quickly but have the ability to dig in further if they wish. If the image is of a chart, you should repeat the data of the chart just like you would for text in the image.

If the platform you are designing for allows users to upload images, you should provide a way for the user to enter the alt text along with the image. For example, Twitter allows its users to write alt text when they upload an image to a tweet.

To use the Lens of Images and Icons, ask yourself these questions:

  • Does any image contain information that would be lost if it was not viewable?
  • How could I provide the information in a non-visual way?
  • If the image is controlled by the user, is it possible to provide a way for them to enter the alt text description?

Lens Of Keyboard

Keyboard accessibility is among the most important aspects of an accessible design, yet it is also among the most overlooked.

There are many reasons why a user would use a keyboard instead of a mouse. Users who use a screen reader use the keyboard to read the page. A user with tremors may use a keyboard because it provides better accuracy than a mouse. Even power users will use a keyboard because it’s faster and more efficient.

A user using a keyboard typically uses the Tab key to navigate to each control in sequence. A logical tab order greatly helps users know where the next key press will take them. In Western cultures, this usually means left to right, top to bottom. An unexpected tab order results in users becoming lost and scanning frantically for where the focus went.

A sequential tab order also means that users must tab through every control before the one they want. If that control is tens or hundreds of keystrokes away, it can be a real pain point for the user.

By moving the most important user flows nearer to the top of the tab order, we can help users be more efficient and effective. However, this isn’t always possible or practical. In those cases, providing a way to quickly jump to a particular flow or piece of content can still keep users efficient. This is why “skip to content” links are helpful.

A good example of this is Facebook, which provides a keyboard navigation menu that allows users to jump to specific sections of the site. This greatly speeds up a user’s ability to reach the page and the content they want.


Facebook provides a way for all keyboard users to jump to specific sections of the page, or other pages within Facebook, as well as an Accessibility Help menu.
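Under the hood, a “skip” link needs only a small amount of scripting beyond a plain in-page anchor: moving keyboard focus to the target region. A minimal sketch, where the .skip-link element and the main-content id are hypothetical names:

var skipLink = document.querySelector('.skip-link');

skipLink.addEventListener('click', function(event) {
  event.preventDefault();
  var main = document.getElementById('main-content');
  main.setAttribute('tabindex', '-1'); // make the region programmatically focusable
  main.focus();                        // move keyboard focus past the header navigation
});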

When tabbing through a design, focus styles should always be visible or a user can easily become lost. Just like an unexpected tab order, not having good focus indicators results in users not knowing what is currently focused and having to scan the page.

Changing the look of the default focus indicator can sometimes improve the experience for users. A good focus indicator doesn’t rely on color alone to indicate focus (Lens of Color), and should be distinct enough for the user to find it easily. For example, a blue focus ring around a similarly colored blue button may not be distinct enough to show that the button is focused.

Although this lens focuses on keyboard accessibility, it’s important to note that it applies to any way a user could interact with a website without a mouse. Devices such as mouth sticks, switch access buttons, sip and puff buttons, and eye tracking software all require the page to be keyboard accessible.

By improving keyboard accessibility, you allow a wide range of users better access to your site.

To use the Lens of Keyboard, ask yourself these questions:

  • What keyboard navigation order makes the most sense for the design?
  • How could a keyboard user get to what they want in the quickest way possible?
  • Is the focus indicator always visible and visually distinct?

Lens Of Layout

Layout contributes a great deal to the usability of a site. Having a layout that is easy to follow with easy to find content makes all the difference to your users. A layout should have a meaningful and logical sequence for the user.

With the advent of CSS Grid, being able to change the layout to be more meaningful based on the available space is easier than ever. However, changing the visual layout creates problems for users who rely on the structural layout of the page.

The structural layout is what is used by screen readers and users using a keyboard. When the visual layout changes but not the underlying structural layout, these users can become confused as their tab order is no longer logical. If you must change the visual layout, you should do so by changing the structural layout so users using a keyboard maintain a sequential and logical tab order.

The layout should be resizable and flexible to a minimum of 320 pixels with no horizontal scroll bars so that it can be viewed comfortably on a phone. The layout should also be flexible enough to be zoomed in to 400% (also with no horizontal scroll bars) for users who need to increase the font size for a better reading experience.

Users using a screen magnifier benefit when related content is in close proximity. A screen magnifier only gives the user a small view of the entire layout, so content that is related but far apart, or that changes far from where the interaction occurred, is hard to find and can go unnoticed.

When performing a search on CodePen, the search button is in the top right corner of the page. Clicking the button reveals a large search input on the opposite side of the screen. A user using a screen magnifier would be hard pressed to notice the change and would think the button doesn’t work.

To use the Lens of Layout, ask yourself these questions:

  • Does the layout have a meaningful and logical sequence?
  • What should happen to the layout when it’s viewed on a small screen or zoomed in to 400%?
  • Is content that is related or changes due to user interaction in close proximity to one another?

Lens Of Material Honesty

Material honesty is an architectural design value that states that a material should be honest to itself and not be used as a substitute for another material. It means that concrete should look like concrete and not be painted or sculpted to look like bricks.

Material honesty values and celebrates the unique properties and characteristics of each material. An architect who follows material honesty knows when each material should be used and how to use it without tarnishing itself.

Material honesty is not a hard and fast rule, though. It lies on a continuum. Like all values, you are allowed to break them when you understand them. As the saying goes, they are “more what you’d call ‘guidelines’ than actual rules.”

When applied to web design, material honesty means that one element or component shouldn’t look, behave, or function as if it were another element or component. Doing so would cheat the user and could lead to confusion. A common example of this is buttons that look like links or links that look like buttons.

Links and buttons have different behaviors and affordances. A link is activated with the enter key, typically takes you to a different page, and has a special context menu on right click. Buttons are activated with the space key, used primarily to trigger interactions on the current page, and have no such context menu.

When a link is styled to look like a button or vice versa, a user can become confused as it does not behave and function as it looks. If the “button” navigates the user away unexpectedly, they might become frustrated if they lose data in the process.

“At first glance everything looks fine, but it won’t stand up to scrutiny. As soon as such a website is stress‐tested by actual usage across a range of browsers, the façade crumbles.”

Resilient Web Design

Where this becomes the most problematic is when a link and button are styled the same and are placed next to one another. As there is nothing to differentiate between the two, a user can accidentally navigate when they thought they wouldn’t.


Can you tell which one of these will navigate you away from the page and which won’t?

When a component behaves differently than expected, it can easily lead to problems for users using a keyboard or screen reader. An autocomplete menu that is more than an autocomplete menu is one such example.

Autocomplete is used to suggest or predict the rest of a word a user is typing. An autocomplete menu allows a user to select from a large list of options when not all options can be shown.

An autocomplete menu is typically attached to an input field and is navigated with the up and down arrow keys, keeping the focus inside the input field. When a user selects an option from the list, that option will override the text in the input field. Autocomplete menus are meant to be lists of just text.

The problem arises when an autocomplete menu starts to gain more behaviors. Not only can you select an option from the list, but you can edit it, delete it, or even expand or collapse sections. The autocomplete menu is no longer just a simple list of selectable text.




With the addition of edit, delete, and profile buttons, this autocomplete menu is materially dishonest.

The added behaviors no longer mean you can just use the up and down arrows to select an option. Each option now has more than one action, so a user needs to be able to traverse two dimensions instead of just one. This means that a user using a keyboard could become confused on how to operate the component.

Screen readers suffer the most from this change of behavior as there is no easy way to help them understand it. A lot of work will be required to make the menu accessible to a screen reader through non-standard means. As such, it will likely result in a sub-par or inaccessible experience for them.

To avoid these issues, it’s best to be honest to the user and the design. Instead of combining two distinct behaviors (an autocomplete menu and edit and delete functionality), leave them as two separate behaviors. Use an autocomplete menu to just autocomplete the name of a user, and have a different component or page to edit and delete users.

To use the Lens of Material Honesty, ask yourself these questions:

  • Is the design being honest to the user?
  • Are there any elements that behave, look, or function as another element?
  • Are there any components that combine distinct behaviors into a single component? Does doing so make the component materially dishonest?

Lens Of Readability

Have you ever picked up a book only to get a few paragraphs or pages in and want to give up because the text was too hard to read? Hard to read content is mentally taxing and tiring.

Sentence length, paragraph length, and complexity of language all contribute to how readable the text is. Complex language can pose problems for users, especially those with cognitive disabilities or who aren’t fluent in the language.

Along with using plain and simple language, you should ensure each paragraph focuses on a single idea. A paragraph with a single idea is easier to remember and digest. The same is true of a sentence with fewer words.

Another contributor to the readability of content is line length. The ideal line length is often quoted as between 45 and 75 characters. A line that is too long causes users to lose focus and makes it harder to move to the next line correctly, while a line that is too short causes users’ eyes to jump too often, causing fatigue.

“The subconscious mind is energized when jumping to the next line. At the beginning of every new line the reader is focused, but this focus gradually wears off over the duration of the line”

— Typographie: A Manual of Design

You should also break up the content with headings, lists, or images to give the reader mental breaks and support different learning styles. Use headings to logically group and summarize the information. Headings, links, controls, and labels should be clear and descriptive to enhance the user’s ability to comprehend.

To use the Lens of Readability, ask yourself these questions:

  • Is the language plain and simple?
  • Does each paragraph focus on a single idea?
  • Are there any long paragraphs or long blocks of unbroken text?
  • Are all headings, links, controls, and labels clear and descriptive?

Lens Of Structure

As mentioned in the Lens of Layout, the structural layout is what is used by screen readers and users using a keyboard. While the Lens of Layout focused on the visual layout, the Lens of Structure focuses on the structural layout, or the underlying HTML and semantics of the design.

As a designer, you may not write the structural layout of your designs. This shouldn’t stop you from thinking about how your design will ultimately be structured though. Otherwise, your design may result in an inaccessible experience for a screen reader.

Take for example a design for a single elimination tournament bracket.


Eight-person tournament bracket featuring George, Fred, Linus, Lucy, Jack, Jill, Fred, and Ginger. Ginger ultimately wins against George.

How would you know if this design was accessible to a user using a screen reader? Without understanding structure and semantics, you may not. As it stands, the design would probably result in an inaccessible experience for a user using a screen reader.

To understand why that is, we first must understand that a screen reader reads a page and its content in sequential order. This means that every name in the first column of the tournament would be read, followed by all the names in the second column, then third, then the last.

“George, Fred, Linus, Lucy, Jack, Jill, Fred, Ginger, George, Lucy, Jack, Ginger, George, Ginger, Ginger.”

If all you had was a list of seemingly random names, how would you interpret the results of the tournament? Could you say who won the tournament? Or who won game 6?

With nothing more to work with, a user using a screen reader would probably be a bit confused about the results. To be able to understand the visual design, we must provide the user with more information in the structural design.

This means that as a designer you need to know how a screen reader interacts with the HTML elements on a page so you know how to enhance their experience.

  • Landmark Elements (header, nav, main, and footer)
    Allow a screen reader to jump to important sections in the design.
  • Headings (h1–h6)
    Allow a screen reader to scan the page and get a high level overview. Screen readers can also jump to any heading.
  • Lists (ul and ol)
    Group related items together, and allow a screen reader to easily jump from one item to another.
  • Buttons
    Trigger interactions on the current page.
  • Links
    Navigate or retrieve information.
  • Form labels
    Tell screen readers what each form input is.

Knowing this, how might we provide more meaning to a user using a screen reader?

To start, we could group each column of the tournament into rounds and use headings to label each round. This way, a screen reader would understand when a new round takes place.

Next, we could help the user understand which players are playing against each other each game. We can again use headings to label each game, allowing them to find any game they might be interested in.

By just adding headings, the content would read as follows:

“Round 1, Game 1, George, Fred, Game 2, Linus, Lucy, Game 3, Jack, Jill, Game 4, Fred, Ginger, Round 2, Game 5, George, Lucy, Game 6, Jack, Ginger, Round 3, Game 7, George, Ginger, Winner, Ginger.”

This is already a lot more understandable than before.

The information still doesn’t answer who won a game though. To know that, you’d have to understand which game a winner plays next to see who won the previous game. For example, you’d have to know that the winner of game four plays in game six to know who advanced from game four.

We can further enhance the experience by informing the user who won each game so they don’t have to go hunting for it. Putting the text “(winner)” after the person who won the round would suffice.

We should also further group the games and rounds together using lists. Lists provide the structural semantics of the design, essentially informing the user of the connected nodes from the visual design.
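To make that concrete, here is one possible sketch of the structural markup for Round 1, held in a JavaScript template string; the exact markup is an assumption, but it shows headings labeling rounds and games with nested lists grouping the players:

var round1 = `
  <h2>Round 1</h2>
  <ol>
    <li>
      <h3>Game 1</h3>
      <ul>
        <li>George (winner)</li>
        <li>Fred</li>
      </ul>
    </li>
    <li>
      <h3>Game 2</h3>
      <ul>
        <li>Linus</li>
        <li>Lucy (winner)</li>
      </ul>
    </li>
    <!-- Games 3 and 4 follow the same pattern -->
  </ol>`;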

If we translate this back into a visual design, the result could look as follows:


The tournament with descriptive headings and winner information (shown here with grey background).

Since the headings and winner text are redundant in the visual design, you could hide them just from visual users so the end visual result looks just like the first design.

“If the end result is visually the same as where we started, why did we go through all this?” you may ask.

The reason is that you should always annotate your design with all the necessary structural design requirements needed for a better screen reader experience. This way, the person who implements the design knows to add them. If you had just handed the first design to the implementer, it would more than likely end up inaccessible.

To use the Lens of Structure, ask yourself these questions:

  • Can I outline a rough HTML structure of my design?
  • How can I structure the design to better help a screen reader understand the content or find the content they want?
  • How can I help the person who will implement the design understand the intended structure?

Lens Of Time

Periodically in a design, you may need to limit the amount of time a user can spend on a task. Sometimes it may be for security reasons, such as a session timeout. Other times it could be due to a non-functional requirement, such as a time-constrained test.

Whatever the reason, you should understand that some users may need more time to finish the task. Some users might need more time to understand the content, others might not be able to perform the task quickly, and a lot of the time they could simply have been interrupted.

“The designer should assume that people will be interrupted during their activities”

— The Design of Everyday Things

Users who need more time to perform an action should be able to adjust or remove a time limit when possible. For example, with a session timeout you could alert the user when their session is about to expire and allow them to extend it.
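Here is a minimal sketch of that pattern. The session length, the /api/extend-session endpoint, and the use of confirm() (a real design would use an accessible dialog) are all placeholder assumptions:

var SESSION_LENGTH = 20 * 60 * 1000; // assumed 20-minute session
var WARNING_BEFORE = 2 * 60 * 1000;  // warn two minutes before expiry

function startSessionTimer() {
  setTimeout(function() {
    var extend = window.confirm('Your session is about to expire. Stay signed in?');
    if (extend) {
      fetch('/api/extend-session', { method: 'POST' }) // hypothetical endpoint
        .then(startSessionTimer); // restart the countdown after extending
    }
  }, SESSION_LENGTH - WARNING_BEFORE);
}

startSessionTimer();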

To use the Lens of Time, ask yourself this question:

  • Is it possible to provide controls to adjust or remove time limits?

Bringing It All Together

So now that you’ve learned about the different lenses of accessibility through which you can view your design, what do you do with them?

The lenses can be used at any point in the design process, even after the design has been shipped to your users. Just start with a few of them at hand, and one at a time carefully analyze the design through a lens.

Ask yourself the questions and see if anything should be adjusted to better meet the needs of a user. As you slowly make changes, bring in other lenses and repeat the process.

By looking through your design one lens at a time, you’ll be able to refine the experience to better meet users’ needs. As you are more inclusive to the needs of your users, you will create a more accessible design for all your users.

Using lenses and insightful questions to examine principles of accessibility was heavily influenced by Jesse Schell and his book “The Art of Game Design: A Book of Lenses.”


Analyzing Your Company’s Social Media Presence With IBM Watson And Node.js





If you are unfamiliar with Machine Learning (ML) technology, it has existed in science fiction for many years and is finally reaching maturity in our society. One of the first ML examples I saw as a kid was in Star Trek’s The Next Generation, when Lieutenant Tasha Yar trains with a holographic opponent that learns how she fights so it can better defeat her in future battles.

In today’s society, China has developed a “lane robot” that is a guard rail controlled by a computer system that can direct the flow of traffic into different lanes, increasing safety and improving traveling time. This is done automatically based on time of day and how much traffic is flowing in each direction.

Another example is Pittsburgh unveiling AI traffic signals that automatically detect traffic patterns and alter the traffic lights on the fly. Each light is controlled independently to help reduce both the commuting time and the idling time of cars. According to the article, pilot tests have demonstrated travel time reduced by 25% and idling time by over 40%. There are, of course, hundreds of other examples of ML technology that make intelligent decisions based on the content they consume.

To accomplish today’s goal, I am going to demonstrate how to perform a search with Twitter’s API to retrieve content that will be fed into an ML algorithm for analysis. This will provide you with characteristics about the users who wrote that content, so you can get a better understanding of your audience. The example application will be written using Node.js as the server.

It is beyond the scope of this article to demonstrate how to write an ML algorithm. Instead, to aid in the analysis, I will demonstrate how to use IBM’s Watson to help you understand the general personality of your social media audience.

What Is IBM Watson?

In 2011, Watson began as a computer system that attempted to index the (entire) Internet. It was originally programmed to answer questions posed in ordinary English. Watson competed on the TV show Jeopardy! and won, claiming the $1,000,000 cash prize.

Watson was now a proven success.

With the fame of winning on Jeopardy!, IBM has continued to push Watson’s capabilities. Watson has evolved into an enterprise-level application focused on Artificial Intelligence (AI), which you can train to identify what you care about most, allowing you to make smarter decisions automatically.

The suite of Watson’s services is divided into six high-level categories:

  1. Conversation
    The services in this category allow you to build intelligent chatbots or a virtual customer service agent.
  2. Knowledge
    This category is focused on teaching Watson how to interpret data to unlock hidden value and monitor trends.
  3. Vision
    This service provides the ability to tag content inside an image, which is used to train Watson to automatically recognize the same pattern in other images.
  4. Speech
    These services provide the ability to convert speech to text and the inverse, text to speech.
  5. Language
    This category is split between translating one language to another as well as interpreting the text to predict what predefined category the text belongs to.
  6. Empathy
    This category is devoted to understanding the content’s tone, personality, and emotional state. Inside this category is a service called “Personality Insights” that will be used in this article to predict personality characteristics from the social media content we provide it.

This article will be focusing on understanding the personality of the content that we will fetch from Twitter. However, as you can see, Watson provides many other AI features that you can explore to automate many other processes simply through training and content aggregation.

Personality Insights

Personality Insights will analyze content and help you understand the habits and preferences at an individual level and at scale. This is called the ‘personality profile.’ The profile is split into two high-level groups: Personality characteristics and Consumption preferences. These groups are further broken down into more finite components.

Note: To help understand the high-level concepts (before we deep dive into the results), the Personality Insights documentation provides this helpful summary describing how the profile is inferred from the content you provide it.


Big Five Personality Traits. Image courtesy: IBM.com.

Personality Characteristics

The Personality Insights service infers personality characteristics based on three primary models:

  • The ‘Big Five’ personality characteristics represent the most widely used model for generally describing how a person engages with the world. The model includes five primary dimensions:
    • Agreeableness
    • Conscientiousness
    • Extraversion
    • Emotional range
    • Openness
      Note: Each dimension has six facets that further characterize an individual according to the dimension.
  • Needs describe which aspects of a product will resonate with a person. The model includes twelve characteristic needs:
    • Excitement
    • Harmony
    • Curiosity
    • Ideal
    • Closeness
    • Self-expression
    • Liberty
    • Love
    • Practicality
    • Stability
    • Challenge
    • Structure
  • Values describe motivating factors that influence a person’s decision making. The model includes five values:
    • Self-transcendence / Helping others
    • Conservation / Tradition
    • Hedonism / Taking pleasure in life
    • Self-enhancement / Achieving success
    • Open to change / Excitement

For more information, see Personality models.

Consumption preferences

Based on the personality characteristics inferred from the input text, the service can also return an indication of the author’s consumption preferences. ‘Consumption preferences’ indicate the author’s likelihood to pursue different products, services, and activities. The service groups the individual preferences into eight categories:

  • Shopping
  • Music
  • Movies
  • Reading and learning
  • Health and activity
  • Volunteering
  • Environmental concern
  • Entrepreneurship

Each category contains from one to as many as a dozen individual preferences.

Note: For more information, see Consumption preferences. For a more in-depth overview of a particular point of interest, I suggest you refer to the Personality Insights documentation.

To be effective, Watson requires a minimum of a hundred words to provide an insight into the consumer’s personality. The more words provided, the better Watson can analyze and determine the consumer’s preference.

This means that if you wish to target individuals, you will need to collect more data than one or two tweets from a specific person. However, if a user writes a product review, blog post, email, or anything else related to your company, this could be analyzed on both an individual level and at scale.

To begin, let’s start by setting up the Personality Insights service to begin analyzing a real-world example.

Configuring The Personality Insights Service

Watson is an enterprise application, but IBM offers a free, limited service. Once you’ve created an account and are logged in, you will need to add the Personality Insights service. IBM offers a Lite plan that is free. The Lite plan is limited to 1,000 API calls per month and is automatically deleted after 30 days — perfect for our demonstration.


Create the Personality Insights Service.

Once the service has been added, we will need to retrieve the service’s credentials to perform API calls against it. From Watson’s Dashboard, your service should be displayed. After you’ve selected the service, you’ll find a link to view the Service credentials in the left-hand menu. You will need to create a new ‘Credential.’ A unique name is required and optional configuration parameters can be defaulted for this login. For now, we will leave the configuration options empty.

After you have created a credential, select the ‘View’ credentials link. This will display the API’s URL, your username, and password required to securely execute API calls. Save these somewhere safe as we will need them in the next step.

Testing The Personality Insights Service

To perform API calls, I am going to use Node.js. If you already have Node.js installed, you can move on to the next step; otherwise, follow the instructions to setup Node.js from the official download page.

To demonstrate how to use the Personality Insights, I am going to create a new Node.js project on my computer. With a command prompt open, navigate to the directory where your Node.js projects will be stored and create your new project:

mkdir watson-sentiments
cd watson-sentiments
npm init

To assist with making the API calls to Watson, I am going to leverage the NPM Package: Watson Developer Cloud Node.js SDK. This package can be installed via the command prompt:

npm install watson-developer-cloud --save

Before making the first call, the PersonalityInsightsV3 object needs to be instantiated with the credentials from the previous section. Begin by creating a new file called index.js that will contain the Node.js code.

Here is an example of configuring the class so it is ready to make API calls:

var PersonalityInsightsV3 = require('watson-developer-cloud/personality-insights/v3');
var personality_insights = new PersonalityInsightsV3({
  "url": "https://gateway.watsonplatform.net/personality-insights/api",
  "username": "**************************",
  "password": "*************",
  "version_date": "2017-12-01"
});

The personality_insights variable is what we will use to interact with the API for the Personality Insights service. Let’s review how to execute a call and return a personality profile:

var fs = require('fs');

personality_insights.profile({
  "contentItems": [
    {
      "content": "Some content that contains more than 100 words...",
      "contenttype": "text/plain",
      "created": 1447639154000,
      "id": "666073008692314113",
      "language": "en"
    }
  ],
  "consumption_preferences": true
}, (err, response) => {
  if (err) throw err;

  fs.writeFile("results.txt", JSON.stringify(response, null, 2), function(err) {
    if (err) throw err;

    console.log("Results were saved!");
  });
});

The profile function accepts an array of contentItems. Each content item contains the actual text along with a few additional properties that help Watson interpret it.

When this is executed, the results are written to a text file (the results are too large to write in the console). The result is an object that contains the following high-level properties:

  • word_count
    The count of words interpreted.
  • processed_language
    The language of the content provided, e.g. ‘en’.

  • Personality
    This is an array of the ‘Big Five’ personality characteristics (Openness, Conscientiousness, Extraversion, Agreeableness, and Emotional range). Each characteristic contains an overall percentile for that characteristic (e.g. 0.8100175318417588). To ascertain more detail, there is an array called children that provides more in-depth insight. For example, a child category under ‘Openness’ is ‘Adventurousness’ that contains its own percentile.
  • Needs
    This is an array of the twelve characteristics that define which aspects of a product a person will resonate with (Excitement, Harmony, Curiosity, Ideal, Closeness, Self-expression, Liberty, Love, Practicality, Stability, Challenge, and Structure). Each characteristic contains a percentile of how the content was interpreted.
  • Values
    This is an array of the five characteristics that describe motivating factors that influence a person’s decision making (Self-transcendence / Helping others, Conservation / Tradition, Hedonism / Taking pleasure in life, Self-enhancement / Achieving success, and Open to change / Excitement). Each characteristic contains a percentile of how the content was interpreted.
  • Behavior
    This is an array that contains thirty-one elements. Each element provides a percentile for when the content was created. Seven of the elements define the days of the week (Sunday through Saturday). The remaining twenty-four elements define the hours of the day. This helps you understand when customers interact with your product.
  • consumption_preferences
    This is an array that contains eight different categories, each with as many as twelve child categories, providing a percentile of likelihood to pursue different products, services, and activities (Shopping, Music, Movies, Reading and learning, Health and activity, Volunteering, Environmental concern, and Entrepreneurship).
  • Warnings
    This is an array that provides messages if a problem was encountered interpreting the content provided.
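As a quick sanity check of the saved output, you can read results.txt back and print the top-level ‘Big Five’ percentiles. A short sketch, assuming the file written by the code above (note that the raw JSON keys, such as personality, are lowercase):

var fs = require('fs');

var profile = JSON.parse(fs.readFileSync('results.txt', 'utf8'));

// Each entry in 'personality' has a name, a percentile, and child facets.
profile.personality.forEach(function(trait) {
  console.log(trait.name + ': ' + Math.round(trait.percentile * 100) + '%');
});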

Here is a CodePen of the formatted results:

See the Pen Example Watson Results by Jamie Munro (@endyourif) on CodePen.

Configuring Twitter

To search Twitter for relevant tweets, I am going to use the Twitter NPM package. From a console window where the application is hosted, run the following command to install:

npm install twitter --save

Before we can implement the Twitter package, you need to create a Twitter application.


Retrieving Twitter’s Access Tokens.

Once you’ve created your application, you need to retrieve the authorization keys required to perform API calls. With your application created, navigate to the ‘Keys and Access Tokens’ page. Since we are not performing API calls against users of Twitter, OAuth integration is not required. Instead, we need only the following four keys:

  1. Consumer Key
  2. Consumer Secret
  3. Access Token
  4. Access Token Secret

The last two keys need to be generated near the bottom of the ‘Keys and Access Tokens’ page. With the keys, here is an example of searching for tweets about #SmashingMagazine:

var Twitter = require('twitter');

var client = new Twitter({
  consumer_key: '*********************',
  consumer_secret: '******************',
  access_token_key: '******************',
  access_token_secret: '****************'
});

client.get('search/tweets', { q: '#SmashingMagazine' }, function(error, tweets, response) {
  if (error) throw error;

  console.log(tweets);
});

The result of this code will log a list of tweets about Smashing Magazine. For the purposes of this demonstration, the following fields are of interest to us:

  1. id
  2. created_at
  3. text
  4. metadata.iso_language_code

These are the fields we will feed Watson.

Integrating Personality Insights With Twitter

With Twitter and Watson both set up, it’s time to integrate the two and see the results. To make it interesting, let’s search for #DonaldTrump to see what the world thinks about the President of the United States. Here is the code example that searches Twitter, feeds the results into Watson, and writes the results to a text file:

var fs = require('fs');
var Twitter = require('twitter');

var client = new Twitter({
  consumer_key: '*********************',
  consumer_secret: '******************',
  access_token_key: '******************',
  access_token_secret: '****************'
});

var PersonalityInsightsV3 = require('watson-developer-cloud/personality-insights/v3');
var personality_insights = new PersonalityInsightsV3({
  "url": "https://gateway.watsonplatform.net/personality-insights/api",
  "username": "**************************",
  "password": "*************",
  "version_date": "2017-12-01"
});

client.get('search/tweets', { q: '#DonaldTrump' }, function(error, tweets, response) {
  if (error) throw error;

  var contentItems = [];

  // Loop through the tweets
  for (var i = 0; i < tweets.statuses.length; i++) {
    var tweet = tweets.statuses[i];

    contentItems.push({
      "content": tweet.text,
      "contenttype": "text/plain",
      "created": new Date(tweet.created_at).getTime(),
      "id": tweet.id,
      "language": tweet.metadata.iso_language_code
    });
  }

  // Call Watson with the tweets
  personality_insights.profile({
    "contentItems": contentItems,
    "consumption_preferences": true
  }, (err, response) => {
    if (err) throw err;

    // Write the results to a file
    fs.writeFile("results.txt", JSON.stringify(response, null, 2), function(err) {
      if (err) throw err;

      console.log("Results were saved!");
    });
  });
});

Here is another CodePen of the formatted results that I received:

See the Pen Donald Trump Watson Results by Jamie Munro (@endyourif) on CodePen.

What Do The Results Say?

Once we’ve analyzed the ‘Openness’ trait of the ‘Big Five,’ we can infer the following:

  • Emotion is quite low at 13%
  • Imagination is average at 54%
  • Intellect is very high at 96%
  • Authority challenging is also quite high at 87%

The ‘Conscientiousness’ trait at a high-level is average at 46% compared with the ‘Openness’ high-level average of 88%. Whereas ‘Agreeableness’ is very low at only 25%. I guess people on Twitter don’t like to agree with Donald Trump.

Moving on to the ‘Needs’: the sub-categories of ‘Curiosity’ and ‘Structure’ are around the 60th percentile, compared to the other categories sitting below the 10th percentile (Excitement, Harmony, etc.).

And finally, under ‘Values,’ the sub-category that stands out to me as interesting is ‘Openness to change’ at an abysmal 6%.

Depending on when you perform your search, your results may vary, as search results are limited to the seven days before executing the example.

From these results, I would determine that the average person who tweets about Donald Trump is quite intellectual, challenges authority, and is not open to change.

These results allow you to automatically alter how you target your content toward your audience. You will need to determine which categories are of interest and which percentiles you wish to target. With this ammunition, you can begin automating.
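As a sketch of what that automation could look like, the snippet below collects every consumption preference whose score crosses a chosen threshold, reusing the profile object parsed from results.txt earlier; the 0.75 cutoff is an arbitrary example:

var THRESHOLD = 0.75;
var targets = [];

// Each category holds child preferences, each with a likelihood score.
profile.consumption_preferences.forEach(function(category) {
  category.consumption_preferences.forEach(function(preference) {
    if (preference.score >= THRESHOLD) {
      targets.push(preference.name);
    }
  });
});

console.log('Preferences to target:', targets);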

What Else Can I Do With Watson?

As I mentioned at the beginning of this article, Watson offers many other services. With these services, you could automate many different parts of common business processes. For example:

  • Build a chatbot that can intelligently answer questions based on a knowledge base of information;
  • Build an application where you dictate to Watson what you want written, using the speech-to-text functionality;
  • Automatically translate your content into different languages to create a multilingual site or knowledge base;
  • Teach Watson how to look for specific patterns in images, for example, to determine if a logo is embedded in a photo.

This, of course, is a very small subset that my limited imagination can postulate. I’m sure you can think of many other ways to leverage Watson’s immense capabilities.

If you are looking for more examples, IBM has an entire GitHub repository dedicated to their Node.js SDK. The example folder contains over ten sample applications demonstrating speech to text, text to speech, tone analysis, and visual recognition, to name just a few.

Conclusion

Before Watson can run away with technological growth, resulting in the singularity where Artificial Intelligence destroys mankind, this article demonstrated how you can turn social media content into a powerful understanding of how the people creating that content think. Using the results from Watson, your application can use the categories of interest where the percentile exceeds or falls below a predetermined amount to change how you target your audience.

If you have other interesting uses of Watson or how you are using the Personality Insights, be sure to leave a comment below.


How BBC Interactive Content Works Across AMP, Apps, And The Web

In the Visual Journalism team at the BBC, we produce exciting, visually engaging, and interactive content, ranging from calculators to visualizations to new storytelling formats.

Each application is a unique challenge to produce in its own right, but even more so when you consider that we have to deploy most projects in many different languages. Our content has to work not only on the BBC News and Sports websites but on their equivalent apps on iOS and Android, as well as on third-party sites which consume BBC content.

Now consider that there is an increasing array of new platforms such as AMP, Facebook Instant Articles, and Apple News. Each platform has its own limitations and proprietary publishing mechanism. Creating interactive content that works across all of these environments is a real challenge. I’m going to describe how we’ve approached the problem at the BBC.

Example: Canonical vs. AMP

This is all a bit theoretical until you see it in action, so let’s delve straight into an example.

Here is a BBC article containing Visual Journalism content:


[Image: BBC News article page containing Visual Journalism content. Our content begins with the Donald Trump illustration and sits inside an iframe.]

This is the canonical version of the article, i.e., the default version, which you’ll get if you navigate to the article from the homepage.

Now let’s look at the AMP version of the article:


[Image: The AMP version of the same article. It looks identical, but pulls in a different iframe designed specifically for AMP, clipped behind a ‘Show More’ button.]

While the canonical and AMP versions look the same, they are actually two different endpoints with different behavior:

  • The canonical version scrolls you to your chosen country when you submit the form.
  • The AMP version doesn’t scroll you, as you cannot scroll the parent page from within an AMP iframe.
  • The AMP version shows a cropped iframe with a ‘Show More’ button, depending on viewport size and scroll position. This is a feature of AMP.

As well as the canonical and AMP versions of this article, this project was also shipped to the News App, which is yet another platform with its own intricacies and limitations. So how do we support all of these platforms?

Tooling Is Key

We don’t build our content from scratch. We have a Yeoman-based scaffold which uses Node to generate a boilerplate project with a single command.

New projects come with Webpack, SASS, deployment and a componentization structure out of the box. Internationalization is also baked into our projects, using a Handlebars templating system. Tom Maslen writes about this in detail in his post, 13 tips for making responsive web design multi-lingual.

Out of the box, this works pretty well for compiling for one platform but we need to support multiple platforms. Let’s delve into some code.

Embed vs. Standalone

In Visual Journalism, we sometimes output our content inside an iframe so that it can be a self-contained “embed” in an article, unaffected by the global scripting and styling. An example of this is the Donald Trump interactive embedded in the canonical example earlier in this article.

On the other hand, sometimes we output our content as raw HTML. We only do this when we have control over the whole page or if we require really responsive scroll interaction. Let’s call these our “embed” and “standalone” outputs respectively.

Let’s imagine how we might build the “Will a robot take your job?” interactive in both the “embed” and “standalone” formats.


[Image: Contrived example showing an ‘embed’ on the left, versus the same content as a ‘standalone’ page on the right.]

Both versions of the content would share the vast majority of their code, but there would be some crucial differences in the implementation of the JavaScript between the two versions.

For example, look at the ‘Find out my automation risk’ button. When the user hits the submit button, they should be automatically scrolled to their results.

The “standalone” version of the code might look like this:

button.on('click', (e) => {
    window.scrollTo(0, resultsContainer.offsetTop);
});

But if you were building this as “embed” output, you know that your content is inside an iframe, so you would need to code it differently:

// inside the iframe
button.on('click', () => {
    window.parent.postMessage({ name: 'scroll', offset: resultsContainer.offsetTop }, '*');
});

// inside the host page
window.addEventListener('message', (event) => {
    if (event.data.name === 'scroll') {
        window.scrollTo(0, iframe.offsetTop + event.data.offset);
    }
});

Also, what if our application needs to go full screen? This is easy enough if you’re in a “standalone” page:

document.body.className += ' fullscreen';

.fullscreen {
    position: fixed;
    top: 0;
    left: 0;
    right: 0;
    bottom: 0;
}


[Image: A map embed with a ‘Tap to Interact’ overlay, and the same map in full-screen mode after being tapped. We successfully use full-screen functionality to make the most of our map module on mobile.]

If we tried to do this from inside an “embed,” this same code would have the content scaling to the width and height of the iframe, rather than the viewport:


[Image: The same map with buggy full-screen mode; text from the surrounding article shows through where it shouldn’t. It can be difficult going full screen from within an iframe.]

…so in addition to applying the full-screen styling inside the iframe, we have to send a message to the host page to apply styling to the iframe itself:

// iframe
window.parent.postMessage({ name: 'window:toggleFullScreen' }, '*');

// host page
window.addEventListener('message', function (event) {
    if (event.data.name === 'window:toggleFullScreen') {
        document.getElementById(iframeUid).className += ' fullscreen';
    }
});

This can translate into a lot of spaghetti code when you start supporting multiple platforms:

button.on('click', (e) => {
    if (inStandalonePage()) {
        window.scrollTo(0, resultsContainer.offsetTop);
    } else {
        window.parent.postMessage({ name: 'scroll', offset: resultsContainer.offsetTop }, '*');
    }
});

Imagine doing an equivalent of this for every meaningful DOM interaction in your project. Once you’ve finished shuddering, make yourself a relaxing cup of tea, and read on.

Abstraction Is Key

Rather than forcing our developers to handle these conditionals inside their code, we built an abstraction layer between their content and the environment. We call this layer the ‘wrapper.’

Instead of querying the DOM or native browser events directly, we can now proxy our request through the wrapper module.

import wrapper from 'wrapper';
button.on('click', () => {
    wrapper.scrollTo(resultsContainer.offsetTop);
});

Each platform has its own wrapper implementation conforming to a common interface of wrapper methods. The wrapper wraps itself around our content and handles the complexity for us.


[Diagram: Simple ‘scrollTo’ implementation by the standalone wrapper. The application calls the wrapper’s scroll method, which calls the native scroll method in the host page.]

The standalone wrapper’s implementation of the scrollTo function is very simple, passing our argument directly to window.scrollTo under the hood.
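
In code, that pass-through could be as small as this (a sketch; the article doesn’t show the wrapper’s internals):

// standalone wrapper: a thin pass-through to the native browser API
export function scrollTo(offset) {
    window.scrollTo(0, offset);
}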

Now let’s look at a separate wrapper implementing the same functionality for the iframe:


[Diagram: Advanced ‘scrollTo’ implementation by the embed wrapper. The wrapper combines the requested scroll position with the iframe’s offset before triggering the native scroll method in the host page.]

The “embed” wrapper takes the same argument as in the “standalone” example but manipulates the value so that the iframe offset is taken into account. Without this addition, we would have scrolled our user somewhere completely unintended.

The Wrapper Pattern

Using wrappers results in code that is cleaner, more readable and consistent between projects. It also allows for micro-optimisations over time, as we make incremental improvements to the wrappers to make their methods more performant and accessible. Your project can, therefore, benefit from the experience of many developers.

So, what does a wrapper look like?

Wrapper Structure

Each wrapper essentially comprises three things: a Handlebars template, a wrapper JS file, and a SASS file denoting wrapper-specific styling. Additionally, there are build tasks which hook into events exposed by the underlying scaffolding so that each wrapper is responsible for its own pre-compilation and cleanup.

This is a simplified view of the embed wrapper:

embed-wrapper/
    templates/
        wrapper.hbs
    js/
        wrapper.js
    scss/
        wrapper.scss

Our underlying scaffolding exposes your main project template as a Handlebars partial, which is consumed by the wrapper. For example, templates/wrapper.hbs might contain:

<div class="bbc-news-vj-wrapper--embed">
    {{> your-application }}
</div>

scss/wrapper.scss contains wrapper-specific styling that your application code shouldn’t need to define itself. The embed wrapper, for example, replicates a lot of BBC News styling inside the iframe.

Finally, js/wrapper.js contains the iframed implementation of the wrapper API, detailed below. It is shipped separately to the project, rather than compiled in with the application code — we flag wrapper as a global in our Webpack build process. This means that though we deliver our application to multiple platforms, we only compile the code once.
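
Flagging a module as a global in Webpack is done with the externals option. Here is a minimal sketch of what that declaration might look like (our real build config is more involved, and this shape is an assumption):

// webpack.config.js (sketch)
module.exports = {
    // entry, output, loaders, etc. elided
    externals: {
        // `import wrapper from 'wrapper'` resolves at runtime to the
        // global `wrapper` object supplied by each platform's wrapper bundle.
        wrapper: 'wrapper'
    }
};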

Wrapper API

The wrapper API abstracts a number of key browser interactions. Here are the most important ones:

scrollTo(int)

Scrolls to the given position in the active window. The wrapper will normalise the provided integer before triggering the scroll so that the host page is scrolled to the correct position.

getScrollPosition: int

Returns the user’s current (normalised) scroll position. In the case of the iframe, this means that the scroll position passed to your application is actually negative until the iframe is at the top of the viewport. This is super useful and lets us do things such as animate a component only when it comes into view (sketched below).
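
A rough sketch of that pattern, using the wrapper methods described in this section (the exact shapes, such as viewport() being a function call, are assumptions):

import wrapper from 'wrapper';

const chart = document.querySelector('.bar-chart');
let animated = false;

wrapper.onScroll(() => {
    // getScrollPosition() is normalised per platform, so the same check
    // works in the standalone page and inside the iframe.
    const viewportBottom = wrapper.getScrollPosition() + wrapper.viewport().height;
    if (!animated && viewportBottom > chart.offsetTop) {
        animated = true;
        chart.classList.add('bar-chart--animated'); // CSS grows the bars
    }
});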

onScroll(callback)

Provides a hook into the scroll event. In the standalone wrapper, this is essentially hooking into the native scroll event. In the embed wrapper, there will be a slight delay in receiving the scroll event since it is passed via postMessage.

viewport: { height: int, width: int }

A method to retrieve the viewport height and width (since this is implemented very differently when queried from within an iframe).

toggleFullScreen

In standalone mode, we hide the BBC menu and footer from view and set a position: fixed on our content. In the News App, we do nothing at all — the content is already full screen. The complicated one is the iframe, which relies on applying styles both inside and outside the iframe, coordinated via postMessage.

markPageAsLoaded

Tell the wrapper your content has loaded. This is crucial for our content to work in the News App, which will not attempt to display our content to the user until we explicitly tell the app our content is ready. It also removes the loading spinner on the web versions of our content.

List Of Wrappers

We have created six wrappers to date; in the future, we envisage creating additional wrappers for large platforms such as Facebook Instant Articles and Apple News.

Standalone Wrapper

The version of our content that should go in standalone pages. Comes bundled with BBC branding.

Embed Wrapper

The iframed version of our content, which is safe to sit inside articles or to syndicate to non-BBC sites, since we retain control over the content.

AMP Wrapper

This is the endpoint which is pulled in as an amp-iframe into AMP pages.

News App Wrapper

The version of our content that goes into the BBC News App, where our content must make calls to a proprietary bbcvisualjournalism:// protocol.

Core Wrapper

Contains only the HTML — none of our project’s CSS or JavaScript.

JSON Wrapper

A JSON representation of our content, for sharing across BBC products.

Wiring Wrappers Up To The Platforms

For our content to appear on the BBC site, we provide journalists with a namespaced path:

/include/[department]/[unique ID], e.g. /include/visual-journalism/123-quiz

The journalist puts this “include path” into the CMS, which saves the article structure into the database. All products and services sit downstream of this publishing mechanism. Each platform is responsible for choosing the flavor of content it wants and requesting that content from a proxy server.

Let’s take that Donald Trump interactive from earlier. Here, the include path in the CMS is:

/include/newsspec/15996-trump-tracker/english/index

The canonical article page knows it wants the “embed” version of the content, so it appends /embed to the include path:

/include/newsspec/15996-trump-tracker/english/index/embed

…before requesting it from the proxy server:

https://news.files.bbci.co.uk/include/newsspec/15996-trump-tracker/english/index/embed

The AMP page, on the other hand, sees the include path and appends /amp:

/include/newsspec/15996-trump-tracker/english/index/amp

The AMP renderer does a little magic to render some AMP HTML which references our content, pulling in the /amp version as an iframe:

<amp-iframe src="https://news.files.bbci.co.uk/include/newsspec/15996-trump-tracker/english/index/amp" width="640" height="360">
    <!-- some other AMP elements here -->
</amp-iframe>

Every supported platform has its own version of the content:

/include/newsspec/15996-trump-tracker/english/index/amp

/include/newsspec/15996-trump-tracker/english/index/core

/include/newsspec/15996-trump-tracker/english/index/envelope

...and so on

This solution can scale to incorporate more platform types as they arise.

Abstraction Is Hard

Building a “write once, deploy anywhere” architecture sounds quite idealistic, and it is. For the wrapper architecture to work, we have to be very strict on working within the abstraction. This means we have to fight the temptation to “do this hacky thing to make it work in [insert platform name here].” We want our content to be completely unaware of the environment it is shipped in — but this is easier said than done.

Features Of The Platform Are Hard To Configure Abstractly

Before our abstraction approach, we had complete control over every aspect of our output, including, for example, the markup of our iframe. If we needed to tweak anything on a per-project basis, such as add a title attribute to the iframe for accessibility reasons, we could just edit the markup.

Now that the wrapper markup exists in isolation from the project, the only way of configuring it would be to expose a hook in the scaffold itself. We can do this relatively easily for cross-platform features, but exposing hooks for specific platforms breaks the abstraction. We don’t really want to expose an ‘iframe title’ configuration option that’s only used by the one wrapper.

We could name the property more generically, e.g. title, and then use this value as the iframe title attribute. However, it starts to become difficult to keep track of what is used where, and we risk abstracting our configuration to the point of no longer understanding it. By and large, we try to keep our config as lean as possible, only setting properties that have global use.

Component Behaviour Can Be Complex

On the web, our sharetools module spits out social network share buttons that are individually clickable and open a pre-populated share message in a new window.


[Image: BBC Visual Journalism sharetools presenting a list of social share options, with Twitter and Facebook icons.]

In the News App, we don’t want to share through the mobile web. If the user has the relevant application installed (e.g. Twitter), we want to share in the app itself. Ideally, we want to present the user with the native iOS/Android share menu, then let them choose their share option before we open the app for them with a pre-populated share message. We can trigger the native share menu from the app by making a call to the proprietary bbcvisualjournalism:// protocol.


[Image: The native share menu on Android, with options such as Messaging, Bluetooth, and Copy to clipboard.]

However, this screen will be triggered whether you tap ‘Twitter’ or ‘Facebook’ in the ‘Share your results’ section, so the user ends up having to make their choice twice; the first time inside our content, and a second time on the native popup.

This is a strange user journey, so we want to remove the individual share icons from the News App and show a generic share button instead. We are able to do this by explicitly checking which wrapper is in use before we render the component, as sketched after the screenshot below.


[Image: The generic share button used in the News App: a single button reading ‘Share how you did’.]
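
That per-wrapper fork could look something like the following sketch. The article doesn’t show how a component detects the active wrapper, so wrapper.platform is a hypothetical property:

import wrapper from 'wrapper';

function renderSharetools(container) {
    if (wrapper.platform === 'news-app') {
        // One generic button; tapping it hands off to the native share
        // menu via the proprietary bbcvisualjournalism:// protocol.
        container.innerHTML = '<button class="vj-share">Share how you did</button>';
    } else {
        // Individually clickable icons, each opening a pre-populated
        // share message in a new window.
        container.innerHTML =
            '<button class="vj-share vj-share--twitter">Twitter</button>' +
            '<button class="vj-share vj-share--facebook">Facebook</button>';
    }
}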

Building the wrapper abstraction layer works well for projects as a whole, but when your choice of wrapper affects changes at the component level, it’s very difficult to retain a clean abstraction. In this case, we’ve lost a little abstraction, and we have some messy forking logic in our code. Thankfully, these cases are few and far between.

How Do We Handle Missing Features?

Keeping abstraction is all well and good. Our code tells the wrapper what it wants the platform to do, e.g. “go full screen.” But what if the platform we’re shipping to can’t actually go full-screen?

The wrapper will try its best not to break altogether, but ultimately you need a design which gracefully falls back to a working solution whether or not the method succeeds. We have to design defensively.

Let’s say we have a results section containing some bar charts. We often like to keep the bar chart values at zero until the charts are scrolled into view, at which point we trigger the bars animating to their correct width.


[Image: Bar charts comparing the user’s area with the national averages, each bar’s value displayed as text to its right.]

But if we have no mechanism to hook into the scroll position — as is the case in our AMP wrapper — then the bars would forever remain at zero, which is a thoroughly misleading experience.


[Image: How the bar chart could look if scroll events aren’t forwarded: every bar incorrectly stuck at 0%.]

We are increasingly trying to adopt more of a progressive enhancement approach in our designs. For example, we could provide a button which is visible on all platforms by default, but which gets hidden if the wrapper supports scrolling. That way, if the scroll event never arrives to trigger the animation, the user can still trigger it manually; see the sketch after the screenshot below.


[Image: The 0% bar charts behind a subtle grey overlay, with a centered ‘View results’ button that triggers the animation on click.]
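
A minimal sketch of that fallback, assuming some capability signal exists (the wrapper API above doesn’t define one, so wrapper.supports is hypothetical):

import wrapper from 'wrapper';

const chart = document.querySelector('.bar-chart');
const button = document.querySelector('.view-results');

function animateBars() {
    chart.classList.add('bar-chart--animated');
}

// The button is the default path and works on every platform.
button.addEventListener('click', animateBars);

// Where scroll events are forwarded, hide the button and let the
// scroll position trigger the animation instead.
if (wrapper.supports && wrapper.supports('onScroll')) {
    button.hidden = true;
    wrapper.onScroll(() => {
        if (wrapper.getScrollPosition() + wrapper.viewport().height > chart.offsetTop) {
            animateBars();
        }
    });
}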

Plans For The Future

We hope to develop new wrappers for platforms such as Apple News and Facebook Instant Articles, as well as to offer all new platforms a ‘core’ version of our content out of the box.

We also hope to get better at progressive enhancement; succeeding in this field means developing defensively. You can never assume all platforms now and in the future will support a given interaction, but a well-designed project should be able to get its core message across without falling at the first technical hurdle.

Working within the confines of the wrapper is a bit of a paradigm shift, and feels like a bit of a halfway house in terms of the long-term solution. But until the industry matures onto a cross-platform standard, publishers will be forced to roll out their own solutions, or use tooling such as Distro for platform-to-platform conversion, or else ignore entire sections of their audience altogether.

It’s early days for us, but so far we’ve had great success in using the wrapper pattern to build our content once and deliver it to the myriad of platforms our audiences are now using.


See the original post:  

How BBC Interactive Content Works Across AMP, Apps, And The Web

Getting Started In Public Speaking: Global Diversity CFP Day

A CFP, or Call For Proposals (sometimes also known as a Call For Papers), is a request for speakers to send their proposed talk ideas to a conference. The conference reviews the proposals and decides whom to invite to speak. Popular conferences can receive hundreds of proposals for a handful of speaking slots, so crafting a great proposal is an important skill for a speaker to learn. Global Diversity CFP Day aims to encourage underrepresented people to write proposals, submit them, and ultimately speak at conferences.

Source:  

Getting Started In Public Speaking: Global Diversity CFP Day

How To Streamline WordPress Multisite Migrations With MU-Migration

Migrating a standalone WordPress site to a site network (or “multisite”) environment is a tedious and tricky endeavor; the opposite is also true. The WordPress Importer works reasonably well for smaller, simpler sites, but leaves room for improvement. It exports content, but not site configuration data such as Widget and Customizer configurations, plugins, and site settings. The Importer also struggles with large amounts of content. In this article, you’ll learn how to streamline this type of migration by using MU-Migration, a WP-CLI plugin.

Link: 

How To Streamline WordPress Multisite Migrations With MU-Migration

Building Better UI Designs With Layout Grids

Designers of all types constantly face issues with the structure of their designs. One of the easiest ways to control the structure of a layout and to achieve a consistent and organized design is to apply a grid system.
A grid is like invisible glue that holds a design together. Even when elements are physically separated from each other, something invisible connects them together.
While grids and layout systems are a part of the heritage of design, they’re still relevant in this multiscreen world we live in.

Link to article – 

Building Better UI Designs With Layout Grids

Influencer Outreach: 5 Pro Tips for Stunning Success

[Image: One share from an influencer can massively impact your traffic. Image via Shutterstock.]

Here’s a bombshell: All that well-written, well-optimized content you’ve been developing probably won’t give you the business boost you’re looking for.

Behind the most impressive online success stories, you won’t find a pithy blog. Instead, you’ll find smart, strategic influencer outreach.

A recent Tomoson poll revealed that “[b]usinesses are making $6.50 for every $1 spent on influencer marketing.” Not only that, marketers rate influencer marketing as their fastest-growing online customer-acquisition method, above organic search and email marketing.

[Image: Influencer marketing is the fastest-growing customer-acquisition method among online marketers.]

It makes sense when you review the benefits…

One share from an influencer can massively impact your traffic. I’ve seen it happen with posts on the Crazy Egg blog. And on my own blog, a post that cites 16 experts has received double the traffic of the next-most-popular blog post, thanks to those experts’ shares.

Participation from just one big-name influencer can give your roundup post, podcast or interview series a ton of traction. Suddenly, other industry leaders are eager to contribute. After all, they want to be seen in the same “category” as Big Name Influencer.

And finally, having a personal relationship with influencers can significantly boost your own credibility, which means you can get more subscribers and followers with less effort.


There’s just one problem: Influencers are busy — and notoriously hard to reach. You have just one chance to reach out to them in a way that opens doors instead of slamming them shut.

Blow it, and you may never get another opportunity to connect with them.

Outreach matters, obviously. But it’s critical to do it right. So I asked PR expert Dmitry Dragilev, founder of JustReachOut.io, to share his five-step process.

Dmitry used PR and content marketing to grow his business from 0 to 3,500 customers in the first year and generate $100,000 revenue in just nine months. His product, JustReachOut, was designed specifically for this type of work, helping you find and pitch relevant journalists and bloggers by searching keywords, competitors, niches, publications and more.

The key is to understand that outreach isn’t about getting what you need as quickly as possible. It’s about taking the time to turn big-name influencers into long-term friends. In a moment, we’ll review Dmitry’s process for doing just that. But first…

Why influencer outreach matters

[Image: Relationship building is key to influencer outreach.]

According to Dmitry, two overriding principles guide influencer outreach: value in advance and relationship building.

Interestingly, these are the words we use most often when talking about marketing. We tend to think that’s what we’re doing when we offer free content such as lead magnets, webinars or video training.

And in the link building emails I get, I’m sure the sender thinks his compliment in the opening line is a value-add that gets my attention.

But when it comes to outreach, Dmitry recommends a more personal approach. Most people, when doing outreach, focus on their own needs. They don’t want to take time to build relationships by providing value up front.

Dmitry says,

That’s a huge mistake. It’s a relationship that will get you the results you want and keep getting you mentions and links in the future.

You have to slow the pace so you can build a deeper, more authentic relationship — which, in the long run, will benefit you more.

Dmitry is emphatic that you shouldn’t ask for or expect a quick transaction. Remember, whether you’re asking for a link, coverage, or a mention, that person doesn’t know you at all.

When you’re drafting your outreach email, ask yourself, “Would I actually say this to a person if I saw them at a conference? Would I walk up to a person I don’t know and make an immediate pitch?”

The key to influencer outreach is to begin the virtual relationship in the same way you would a live relationship: start with common ground, talk about them, let them tell you what they need.

By giving value before you ask for anything in return, you’ve got a much higher chance of getting what you want.

Now let’s look at the framework Dmitry uses when he does influencer outreach.

1. Your why

Always start with a goal in mind. What action would you like the influencer to take? Why do you want to connect with him or her?

You won’t necessarily start the conversation with your goal, but you need to have a legitimate reason for reaching out.

2. Finding influencers

The type of person you choose to reach out to depends on your goals. If you want publicity, look for a journalist. If you want a product review, look for a content marketer or blogger who does reviews. For a celebrity mention, find a celebrity who is involved in your industry or respected within your niche.

Where do you find these people?

The quickest path is through a tool designed for the purpose. JustReachOut is one choice. I’ve played around with Mailshake (formerly ContentMarketer.io) and Ninja Outreach and can recommend them as well.

But if your budget is tight, you can search for influencers manually in forums such as Reddit and Quora, or through HelpAReporter, Twitter, or ProfNet queries.

Once you’ve identified an influencer, you need to do some research. Learn as much as you can about what they’re doing and look for ways to help them.

[Image: Except we don’t call it stalking. We call it following. Via Giphy.]

As an example, Dmitry wanted to see if he could get an interview with Ashton Kutcher for his speaker series.

He began by trying to figure out what Ashton’s motivations might be. Knowing he’s trying to break into the startup world and start investing, Dmitry guessed the actor was trying to network and learn as much as he could.

Dmitry also began following Ashton’s work, including his speaking engagements and social media activity. The goal? To identify the people he’s quoting or talking about.

Next, he developed a strategy for his outreach. He identified some experts whom Ashton seemed to admire, reasoning, “if I have maybe a quarter of those people on my speaker series maybe I can reach out and say, ‘I have these other people lined up. Would you be willing to speak as well?’”

It worked like a charm. After scheduling those experts for his speaking series, Dmitry finally reached out to Ashton, and the answer was gracious. “Yeah, I really do admire a lot of people you have on your guest list.” The actor connected Dmitry with his assistant and they’ve been in touch ever since.

Notice that Dmitry actively looked for a point of intersection, so his email would feel authentic and credible.

3. Contacting influencers

Most outreach emails follow a word-for-word template.

Big mistake!

Is it any wonder those emails get deleted? In many cases, the influencer has seen that template hundreds of times already.

To get an influencer’s attention, you need to be human. Be yourself. And before hitting “send,” review your email carefully to be sure it sounds authentic. Here’s what Dmitry recommends:

I think there is something to be said for just reading your email as if someone sent it to you. Is it interesting or overly pushy?

Then test it out on your readers or your friends. How did they feel by the time you asked for a sale? Did you provide enough value upfront?

But before trying to craft your email, you need to clarify two things:

  1. Your value offer. What’s in it for them? Make sure you offer more value than you ask in return.
  2. Your pitch. Find a point of intersection, then add credibility. Your offer has to be meaningful to them.

As an example, look at the email Dmitry first sent to me. The subject line was “your mention of ContentMarketer, I’m friends with founder.” And the email read:

Hey Kathryn, 

My name is Dmitry Dragilev and I am the founder of JustReachOut.io. I help startups and entrepreneurs hack pitching and getting press mentions weekly without the help of PR firms.  

I stumbled across your article today, remember this? 

https://mirasee.com/blog/promotion-guidelines/

You mentioned ContentMarketer.io in the article, I love the service, I’m old friend of Sujan Patel the founder actually, we write articles on Forbes together actually.

I’m a regular contributor to Entrepreneur, TechCrunch, HuffPo, Mashable, TNW, Inc, FastCo and many others.

I grew the last startup I worked on from 0 to 40M+ pageviews through PR outreach and we got acquired by Google, I automated the same PR outreach process to build JustReachOut.io algorithm, we have 2K+ startups currently paying and using us to pitch press.

I am not here to brag I promise! I simply want to connect with you :)

Two things:

  1. I have some PR hacks I wanted to share with you, maybe you could use them in your next article.
  2. I thought you might be interested in learning about JustReachOut.io. I will be very happy to give you a special extended trial for a test run.

I respect your schedule and will completely understand if don’t have the time. No hard feelings. But you will miss the chance to make my day and to learn how to get press on the biggest outlets in the world.

Have an awesome Friday. 

-Dmitry 

Let’s look at the structure he follows:

  1. Introduce yourself. Tell them who you are.
  2. Identify a point of intersection. Something you’ve both said or someone you both know. You can be creative, but it needs to be genuine.
  3. Add a bit of credibility. In other words, why you’re worth talking to about this topic.
  4. Make an offer. It should be something that adds value to the recipient, so they feel comfortable responding.

4. Following up (the right way)

[Image: Relax. Follow-up takes time. Don’t rush it.]

Dmitry has found that moving too fast can derail your efforts. Take it slow, he says, and you’ll get better results.

  • Don’t be aggressive.
  • Don’t lose patience.
  • Don’t push for immediate results.

Your focus should be on adding value over time, not immediately achieving your goal.

It’s critical to slow down your time frame so the relationship can evolve naturally. You need to hold off asking for anything until you’ve built trust and reciprocity.

For example, Dmitry still hasn’t been able to schedule that interview with Ashton Kutcher, but there’s no reason to rush. They continue to correspond about three times a year.

5. Adopting a value-first mindset

Influencer outreach isn’t easy. It takes patience and perseverance — and a commitment to giving at least as much as you receive. Dmitry says the key is to start now, before you need an influencer, so the value exchange is already in your favor.

If you wait until the last minute to begin your outreach campaign, you’ve put yourself in the position of needing results quickly. Then you’ll do everything wrong.

That being the case, don’t tack outreach onto the end of your campaigns or content promotion. You need to be building and nurturing relationships all the time.

The bottom line

Too often as content marketers, we’re focused on creating quality content, scheduling social media and doing a lot of technical tasks for promotion. In many cases, moving quickly from one task to another is how you get results.

Influencer outreach is just the opposite. For success, you need to slow your pace, focus on the people you’re contacting and help them reach their goals.

It may look like a distraction or a low-ROI activity. In reality, it’s an investment that can pay huge dividends down the road.

What are your biggest outreach challenges? Share in the comments below.

Visit source – 

Influencer Outreach: 5 Pro Tips for Stunning Success

Content-First Prototyping

Content is the core commodity of the digital economy. It is the gold we fashion into luxury experience, the diamond we encase in loyalty programs and upsells. Yet, as designers, we often plug it in after the fact. We prototype our interaction and visual design to exhaustion, but accept that the “real words” can just be dropped in later. There is a better way.
More and more, the digital goods we create operate within a dynamic system of content, functionality, code and intent.

Continue reading:  

Content-First Prototyping


Getting Back Into The (Right) Deliverables Business

“Get out of the deliverables business” has become quite a mantra in the lean startup and UX movements. There’s much to love in that sentiment — after all, for every wireframe you make, you’re not shipping code to customers.
But I’m worried that, just like with the concept of a minimum viable product, we’ve taken this sound advice to an extreme that’s actually hurtful to the creation of good products.

See the article here: 

Getting Back Into The (Right) Deliverables Business

How I Built The One Page Scroll Plugin

Scrolling effects have been around in web design for years now, and while many plugins are available to choose from, only a few have the simplicity and light weight that most developers and designers are looking for. Most plugins I’ve seen try to do too many things, which makes it difficult for designers and developers to integrate them in their projects.
Further reading on Smashing:

  • Infinite Scrolling: Let’s Get To The Bottom Of This
  • Get the Scrolling Right
  • Reapplying Hick’s Law of Narrowing Decision Architecture
  • Advanced Navigation With Two Independent Columns
  • Takeaways From Mobile Web Behavior

Not long ago, Apple introduced the iPhone 5S, which was accompanied by a presentation website on which visitors were guided down sections of a page and whose messaging was reduced to one key function per section.

Source:  

How I Built The One Page Scroll Plugin