Tag Archives: chrome

The Importance Of Manual Accessibility Testing

Eric Bailey



Earlier this year, a man drove his car into a lake after following directions from a smartphone app that helps drivers navigate by issuing turn-by-turn directions. Unfortunately, the app’s programming did not include instructions to avoid roads that turn into boat launches.

From the perspective of the app, it did exactly what it was programmed to do, i.e. to find the optimal route from point A to point B given the information made available to it. From the perspective of the man, it failed him by not taking the real world into account.

The same principle applies to accessibility testing.


Automated Accessibility Testing

I am going to assume that you’re reading this article because you’re interested in learning how to test your websites and web apps to ensure they’re accessible. If you want to learn more about why accessibility is necessary, the topic has been covered extensively elsewhere.

Automated accessibility testing is a process where you use a series of scripts to test for the presence or absence of certain conditions in code. These conditions are dictated by the Web Content Accessibility Guidelines (WCAG), a standard by the W3C that outlines how to make digital experiences accessible.

For example, an automated accessibility test might check to see if the tabindex attribute is present and if its value is greater than 0. The pseudocode would be something like:


A flowchart describing the check: if a tabindex value is present and it is greater than 0, the check fails; if the value is not greater than 0, or no tabindex is present, it passes.
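Expressed as a rough code sketch (the function name and the use of querySelectorAll are illustrative assumptions, not the API of any particular testing tool):

// Flag any element whose tabindex is greater than 0,
// since positive values override the natural tab order.
function checkTabindex(document) {
  const failures = [];

  document.querySelectorAll('[tabindex]').forEach((element) => {
    const value = parseInt(element.getAttribute('tabindex'), 10);

    if (value > 0) {
      failures.push(element); // fails the check
    }
    // tabindex="0" or a negative value passes
  });

  return failures; // an empty array means the page passes
}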

Failures can then be collected and used to generate reports that disclose the number and severity of accessibility issues. Certain automated accessibility products can also integrate as a Continuous Integration or Continuous Deployment (CI/CD) tool, presenting just-in-time warnings to developers when they attempt to add code to a central repository.

These automated programs are incredible resources. Modern websites and web apps are complicated things that involve hundreds of states, thousands of lines of code, and complicated multi-screen interactions. It’d be absurd to expect a human (or a team of humans) to mind all the code controlling every possible permutation of the site, to say nothing of things like regressions, software rot, and A/B tests.

Automation really shines here. It can repeatedly and tirelessly pore over these details with perfect memory, at a rate far faster than any human is capable of.

However…

Automated accessibility tests aren’t a turnkey solution, nor are they a silver bullet. There are some limitations to keep in mind when using them.

Thinking To Think Of Things

One of both the best and worst aspects of the web is that there are many different ways to implement a solution to a problem. While this flexibility has kept the web robust and adaptable and ensured it outlived other competing technologies, it also means that you’ll sometimes see code that is, um, creatively implemented.

The test suite is only as good as what its author thought to check for. A naïve developer might only write tests for the happy path, where everyone writes semantic HTML, fault-tolerant JavaScript, and well-scoped CSS. However, this is the real world. We need to acknowledge that things like tight deadlines, unfamiliarity with the programming language, atypical user input, and sketchy 3rd party scripts exist.

For example, the automated accessibility testing site Tenon.io wisely includes a rule that checks to see if a form element has both a label element and an aria-label associated with it, and if the text strings for both declarations differ. If they do, it will flag it as an issue, as the visible label may be different than what someone would hear if they were navigating using a screen reader.

If you’re not using a testing service that includes this rule, it won’t be reported. The code will still “pass”, but it’s passing by omission, not because it’s actually accessible.
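As an illustration, here is a hypothetical field that this kind of rule would flag, where the visible label says one thing and the aria-label announces another:

<label for="email">Email address</label>
<!-- Screen readers announce the aria-label instead of the visible label text -->
<input type="email" id="email" name="email" aria-label="Enter the email">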

State

Some automated accessibility tests cannot parse the various states of interactive content. Critical parts of the user interface are effectively invisible to automation unless the test is run when the content is in an active, selected, or disabled state.

By interactive content, I mean things that the user has yet to take action on, or aren’t present when the page loads. Unopened modals, collapsed accordions, hidden tab content and carousel slides are all examples.

It takes sophisticated software to automatically test the various states of every component within a single screen, let alone across an entire web app or website. While it is possible to augment testing software with automated accessibility checks, it is very resource-intensive, usually requiring a dedicated team of engineers to set up and maintain.

“Valid” Markup

Accessible Rich Internet Applications (ARIA) is a set of attributes that extend HTML to allow it to describe interaction in a way that can be better understood by assistive technologies. For example, the aria-expanded attribute can be toggled by JavaScript to programmatically communicate if a component is in an expanded (true) or collapsed (false) state. This is superior to toggling a CSS class like .is-expanded, where the update in state is only communicated visually.
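A minimal sketch of that pattern (the markup and IDs are illustrative, not taken from any specific component library):

<button aria-expanded="false" aria-controls="details">Show details</button>
<div id="details" hidden>Additional details go here.</div>

<script>
  const button = document.querySelector('[aria-controls="details"]');
  const panel = document.getElementById('details');

  button.addEventListener('click', () => {
    const expanded = button.getAttribute('aria-expanded') === 'true';
    // Communicate the new state programmatically, not just visually
    button.setAttribute('aria-expanded', String(!expanded));
    panel.hidden = expanded;
  });
</script>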

The presence of ARIA alone does not guarantee that something will automatically be accessible. Unfortunately, and in spite of its first rule of use, ARIA is commonly misunderstood, and consequently abused. A lot of off-the-shelf code has this problem, perpetuating the issue.

For example, certain ARIA attributes and values can only be applied to certain elements. If incorrectly applied, assistive technology will ignore or misreport the declaration. Certain roles, known as Abstract Roles, only exist to set up the overall taxonomy and should never be placed in markup.

<!-- Never do this -->
<button role="command">Save</button>

To further complicate the issue, support for ARIA is varied across browsers. While an attribute may be used appropriately, the browser may not communicate the declared role, property, or state to assistive technology.

There is also the scenario where ARIA can be applied to an element and be valid from a technical standpoint, yet be unusable from an assistive technology perspective. For example:

<h1 aria-hidden="true">
  Tired of unevenly cooked asparagus? Try this tip from the world’s oldest cookbook.
</h1>


The aria-hidden declaration will remove the presence of content from assistive technology, yet allow it to be still rendered visibly on the page. It’s a problematic pattern.

Headings — especially first-level headings — are vital in communicating the purpose of a page. If a person is using assistive technology to navigate, the aria-hidden declaration applied to the h1 element will make it difficult for them to quickly determine the page’s purpose. It will force them to navigate around the rest of the page to gain context, an annoying and labor-intensive process.

Some automated accessibility tests may scan the code and not report an error since the syntax itself is valid. The automation has no way of knowing the greater context of the declaration’s use.

This isn’t to say you should completely avoid using ARIA! When authored with care and deliberation, ARIA can fix the gaps in accessibility that sometimes plague complicated interactions; it provides some much-needed context to the people who rely on assistive technology.

Much-Needed Context

As the soggy car demonstrates, computers are awful at understanding the overall situation of the outside world. It’s up to us humans to be the ultimate arbiters in determining if what the computer spits out is useful or not.

Debunking

Before we discuss how to provide appropriate context, there are a few common misunderstandings about accessibility work that need to be addressed:

First, not all screen reader users are blind. In addition to all the points Adrian Roselli outlines in his post, some food for thought: the use of voice assistants is on the rise. When’s the last time you spoke to Siri or Alexa?

Second, accessibility is more than just screen readers. The rules outlined in the Web Content Accessibility Guidelines ensure that the largest number of people can read and operate technology, regardless of ability or circumstance.

For example, the rule that stipulates a website or web app needs to be able to work regardless of device orientation benefits everyone. Some people may need to mount their device in a fixed location in a specific orientation, such as in landscape mode on the arm of a wheelchair. Others might want to lie in bed and watch a movie, or better investigate a product photo (pinch and pull zooming will also be helpful to have here).

Third, disabilities can be conditional and can be brought about by your environment. It can be a short-term thing, like rain on your glasses, sleep deprivation, or an allergies-induced migraine. It can also be longer-term, such as a debilitating illness, broken limb, or a depressive episode. Multiple, compounding conditions can (and do) affect individuals.

That all being said, many accessibility fixes that help screen readers work properly also benefit other assistive technologies.

Get Your Feet Wet

Knowing where to begin can be overwhelming. Consider Michiel Bijl’s great advice:

“Before you release a website, tab through it. If you cannot see where you are on the page after each tab; you're not finished yet. #a11y”

Tab through a few of the main user flows on your website or web app to determine if all interactive components’ focus states are visually apparent, and if they can be activated via keyboard input. If there’s something you can click or tap on that isn’t getting highlighted when receiving keyboard focus, take note of it. Also pay attention to the order interactive components are highlighted when focused — it should match the reading order of the site.

An obvious focus state and logical tab order go a long way toward making your site accessible. These two features benefit a wide variety of assistive technology, including, but not limited to, screen readers.
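If focus styles have been suppressed somewhere along the way, restoring an obvious indicator can be as simple as the following sketch (the selectors and colors are placeholders, not a design recommendation):

/* Make keyboard focus visually apparent on interactive elements */
a:focus,
button:focus,
input:focus,
select:focus,
textarea:focus {
  outline: 3px solid #1a73e8;
  outline-offset: 2px;
}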

If you need a baseline to compare your testing to, Dave Rupert has an excellent project called A11Y Nutrition Cards, which outlines expected behavior for common interactive components. In addition, Scott O’Hara maintains a project called a11y Styled Form Controls. This project provides examples of components such as switches, checkboxes, and radio buttons that have well-tested and documented support for assistive technology. A clever reader might use one of these resources to help them try out the other!


A screenshot of the a11y Styled Form Controls website alongside the Nutrition Cards for Accessible Components website.

The Fourth Myth

With that out of the way, I’m going to share a fourth myth with you: not every assistive technology user is a power user. Like with any other piece of software, there’s a learning curve involved.

In her post about Aaptiv’s redesign, Lisa Zhu discovers that their initial accessibility fix wasn’t intuitive. While their first implementation was “technically” correct, it didn’t line up with how people who rely on VoiceOver actually use their devices. A second solution simplified the interaction to better align with their expectations.

Don’t assume that just because something hypothetically functions that it’s actually usable. Trust your gut: if it feels especially awkward, cumbersome, or tedious to operate for you, chances are it’ll be for others.

Dive Right In

While not every accessibility issue is a screen reader issue, you should still get in the habit of testing your site with one. Not an emulator, simulator, or some other proxy solution.

If you find yourself struggling to operate a complicated interactive component using basic screen reader commands, it’s probably a sign that the component needs to be simplified. Chances are that the simplification will help non-assistive technology users as well. Good design benefits everyone!

The same goes for navigation. If it’s difficult to move around the website or web app, it’s probably a sign that you need to update your heading structure and landmark roles. Both of these features are used by assistive technology to quickly and efficiently navigate.


Two code examples for a sidebar: one uses a div element, while the other uses an aside element. Both have the class of sidebar applied to them, with a subheading of Recent Posts.
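A sketch of the markup that comparison describes (the class name and heading text follow the figure):

<!-- A computer can't tell that this is a sidebar -->
<div class="sidebar">
  <h2>Recent Posts</h2>
</div>

<!-- The aside element is exposed to assistive technology as a complementary landmark -->
<aside class="sidebar">
  <h2>Recent Posts</h2>
</aside>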


Both of these are sidebars, but only one of them is semantically described as such. A computer doesn’t know what a sidebar is, so it’s up to you to tell it.

Another good thing to review is the text content used to describe your links. Hopping from link to link is another common assistive technology navigation technique; some screen readers can even generate a list of all link content on the page:

“Think before you link! Your "helpful" click here links look like this to a screen reader user.” (The attached image shows a JAWS links list.)

Neil Milliken

When navigating using a list of links devoid of the surrounding non-link content, avoiding ambiguous terms like “click here” or “more info” can go a long way to ensuring a person can understand the overall meaning of the page. As a bonus, it’ll help alleviate cognitive concerns for everyone, as you are more accurately explaining what a user should expect after activating a link.
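A small illustration of the difference (the URL is a placeholder):

<!-- Ambiguous when read out of context in a links list -->
<a href="/pricing">Click here</a> to see our pricing.

<!-- Meaningful on its own -->
See <a href="/pricing">our pricing plans</a>.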

How To Test

Each screen reader has a different approach to how it announces content. This is intentional. It’s a balancing act between the product’s features, the operating system it is installed on, the form factor it is available in, and the types of input it can receive.

The Browser Wars taught us the folly of developing for only one browser. Similarly, we should not cater to a single screen reader. It is important to note that many people rely exclusively on a specific screen reader and browser combination — by circumstance, preference, or necessity — making this all the more important. However, there is a caveat: each screen reader works better when used with a specific browser, typically the one that allows it access to the greatest amount of accessibility API information.

All of these screen readers can be used for free, provided you have the hardware. You can also virtualize that hardware, either for free or on the cheap.

Automate

Automated accessibility tests should be your first line of defense. They will help you catch a great deal of nitpicky, easily-preventable errors before they get committed. Repeated errors may also signal problems in template logic, where one upstream tweak can fix multiple pages. Identifying and resolving these issues allows you to spend your valuable manual testing time much more wisely.

It may also be helpful to log accessibility issues in a place where people can collaborate, such as Google Sheets. Quantifying the frequency and severity of errors can lead to good things like updated documentation, opportunities for lunch and learn education, and other healthy changes to organizational workflow.

Much like manual testing with a variety of screen readers, it is recommended that you use a combination of automated tools to prevent gaps.

Windows

The two most popular screen readers on Windows are JAWS and NVDA.

JAWS

JAWS (Job Access With Speech) is the most popular and feature-rich screen reader on the market. It works best with Firefox and Chrome, with concessions for supporting Internet Explorer. Although it is paid software, it can be operated in full in demo mode for 40 minutes at a time (this should be more than sufficient to perform basic testing).

NVDA

NVDA (NonVisual Desktop Access) is free, although a donation is strongly encouraged. It is a feature-rich alternative to JAWS. It works best with Firefox.

Narrator

Windows comes bundled with a built-in screen reader called Narrator. It works well with Edge, but has difficulty interfacing with other browsers.

Apple

macOS

VoiceOver is a powerful screen reader that comes bundled with macOS. Use it in conjunction with Safari, first making sure that full keyboard access is enabled.

iOS

VoiceOver is also included in iOS, and is the most popular mobile screen reader. Much like its desktop counterpart, it works best with Safari. An interesting note here is that according to the 2017 WebAIM screen reader survey, a not-insignificant number of respondents augment their phone with external hardware keyboards.

Android

Google recently folded TalkBack, their mobile screen reader, into a larger collection of accessibility services called the Android Accessibility Suite. It works best with Mobile Chrome. While many Android apps are notoriously inaccessible, it is still worth testing on this platform. Android’s growing presence in emerging markets, as well as increasing internet use amongst elderly and lower-income demographics, should give pause for consideration.

Popular screen readers

  • JAWS (Windows): works best with Chrome and Firefox. Manual: JAWS 2018 Documentation. Launch: start JAWS as you would any other Windows application. Quit: Insert + F4.
  • NVDA (Windows): works best with Firefox. Manual: NVDA 2018.2.1 User Guide. Launch: Ctrl + Alt + N. Quit: Insert + Q.
  • Narrator (Windows): works best with Edge. Manual: Get started with Narrator. Launch and quit: Windows key + Control + Enter.
  • VoiceOver (macOS): works best with Safari. Manual: VoiceOver Getting Started Guide. Launch and quit: Command + F5, or tap the Touch ID button three times.
  • Mobile VoiceOver (iOS): works best with Mobile Safari. Manual: VoiceOver overview – iPhone User Guide. Launch: tell Siri to “Turn on VoiceOver”, or activate it in Settings. Quit: tell Siri to “Turn off VoiceOver”, or deactivate it in Settings.
  • Android Accessibility Suite (Android): works best with Mobile Chrome. Manual: Get started on Android with TalkBack. Launch and quit: press both volume keys for 3 seconds.

Call The Professionals

If you do not require the use of assistive technology on a frequent basis, then you do not fully understand how the people who do rely on it interact with the web.

Much like traditional user testing, being too close to the thing you created may cloud your judgment. Empathy exercises are a good way to become aware of the problem space, but you should not use yourself as a litmus test for whether the entire experience is truly accessible. You are not the expert.

If your product serves a huge population of users, if its core base of users trends towards having a higher probability of disability conditions (specialized product, elderly populations, foreign language speakers, etc.), and/or if it is required to be compliant by law, I would strongly encourage allocating a portion of your budget for testing by people with disabilities.

“At what point does your organisation stop supporting a browser in terms of % usage? 18% of the global pop. have an #Accessibility requirement, 2% people have a colour vision deficient. But you consider 2% IE usage support more important? Support everyone be inclusive.”

Mark Wilcock

This isn’t to say you should completely delegate the responsibility to these testers. Much as how automated accessibility testing can detect smaller issues to remove, a first round of basic manual testing helps professional testers focus their efforts on the complicated interactions you need an expert’s opinion on. In addition to optimizing the value of their time, it helps to get you more comfortable triaging. It is also a professional courtesy, plain and simple.

There are a few companies that offer manual testing performed by people with disabilities.

Designed Experiences

We also need to acknowledge the other large barrier to accessible sites that can’t be automated away: poor user experience.

User experience can make or break a product. Your code can compile perfectly, your time to first paint can be lightning quick, and your Webpack setup can be beyond reproach. All this is irrelevant if the end result is unusable. User experience encompasses all users, including those who navigate with the aid of assistive technology.

If a person cannot operate your website or web app, they’ll abandon it and not think twice. If they are forced to use your site to get a service unavailable by other means, there’s a growing precedent for taking legal action (and rightly so).

As a discipline, user experience can be roughly divided into two parts: how something looks and how it behaves. They’re intrinsically interlinked concepts — work on either may affect both. While accessible design is a topic unto itself, there are some big-picture things we can keep in mind when approaching accessible user experiences from a testing perspective:

How It Looks

The WCAG does a great job covering a lot of the basics of good design. Color contrast, font size, user-facing state: a lot of these things can be targeted by automation. What you should pay attention to is all the atomic, difficult to quantify bits that compound to create your designs. Things like the words you choose, the fonts you use to display them, the spacing between things, affordances for interaction, the way you handle your breakpoints, etc.

“A good font should tell you:
the difference between m and rn
the difference between I and l
the difference between O and 0.”

mallory, alice & bob

It’s one of those “an ounce of prevention is worth a pound of cure” situations. Smart, accessible defaults can save countless time and money down the line. Lean and mean startups all the way up to multinational conglomerates value efficient use of resources, and this is one of those places where you can really capitalize on that. Put your basic design patterns — say collected in something like a mood board or living style guide — in front of people early and often to see if your designed intent is clear.

How It Behaves

An enticing color palette and collection of thoughtfully-curated stock photography only go so far. Eventually, you’re going to have to synthesize all your design decisions to create something that addresses a need.

Behavior can be as small as a microinteraction, or as large as finding a product and purchasing it. What’s important here is to make sure that all the barriers to a person trying to accomplish the task at hand are removed.

If you’re using personas, don’t create a separate persona for a user with a disability. Instead, blend accessibility considerations into your existing ones. As a persona is an abstracted representation of the types of users you want to cater to, you want to make sure the kinds of conditions they may be experiencing are included. Disability conditions aren’t limited to just physical impairments, either. Things like a metered data plan, non-native language, or anxiety are all worth integrating.

“When looking at your site's analytics, remember that if you don't see many users on lower end phones or from more remote areas, it's not because they aren't a target for your product or service. It is because your mobile experience sucks.
As a developer, it's your job to fix it.”

Estelle Weyl

User testing, ideally simulating conditions as close to what a person would be doing in the real world (including their individual device preferences and presence of assistive technology), is also key. Verifying that people are actually able to make the logical leaps necessary to operate your interface addresses a lot of cognitive concerns, a difficult-to-quantify yet vital thing to accommodate.

We Shape Our Tools, Our Tools Shape Us

Our tool use corresponds to the kind of work we do: Carpenters drive nails with hammers, chefs cook using skillets, surgeons cut with scalpels. It’s a self-reinforcing phenomenon, and it tends to lead to over-categorization.

Sometimes this over-categorization gets in the way of us remembering to consider the real world. A surgeon might have a carpentry hobby; a chef might be a retired veterinarian. It’s important to understand that accessibility is everyone’s responsibility, and there are many paths to making our websites and web apps the best they can be for everyone. To paraphrase Mikey Ilagan, accessibility is a holistic practice, essential to some but useful to all.

Used with discretion, ARIA is a very good tool to have at our disposal. We shouldn’t shy away from using it, provided we understand how and why it works.

The same goes for automated accessibility tests, as well as GPS apps. They’re great tools to have, just get to know the terrain a little bit first.




Take A New Look At CSS Shapes

Rachel Andrew



CSS Shapes Level 1 has been available in Chrome and Safari for a number of years, however, this week it ships in a production version of Firefox with the release of Firefox 62 — along with a very nice addition to the Firefox DevTools to help us work with Shapes. In this article, I’ll take a look at some of the things you can do with CSS Shapes. Perhaps it’s time to consider adding some curves to your designs?

What Are CSS Shapes?

The CSS Shapes specification Level 1 defines three new properties:

  • shape-outside
  • shape-image-threshold
  • shape-margin

The purpose of this specification is to allow content to flow around a non-rectangular shape, something which is quite unusual on our boxy web. There are a few different ways to create shapes, which we will have a look at in this tutorial. We will also have a look at the Shape Path Editor, available in Firefox, as it can help you to easily understand the shapes on your page and work with them.

In the current specification, shapes can only be applied to a float, so any shapes example needs to start with a floated element. In the example below, I have a PNG image with a transparent background, which I have floated left. The text that follows the image now flows around the right and bottom of my image.

What I would like to happen is for my content to follow the shape of the opaque part of the image, rather than follow the line of the physical image file. To do this, I use the shape-outside property, with the value being the URL of my image. I’m using the actual image file to create a path for the content to flow around.

See the Pen Smashing Shapes: image by Rachel Andrew (@rachelandrew) on CodePen.

Note that your image needs to be CORS compatible, so hosted on the same server as the rest of your content or sending the correct headers if hosted on a CDN. Browser DevTools will usually tell you if your image is being blocked due to CORS.

This method of creating shapes uses the alpha channel of the image to create the shape: as we have a shape with a fully transparent area, all we need to do is pass the URL of the image to shape-outside, and the shape path follows the line of the fully opaque area.

Creating A Margin

To push the line of the text away from the image we can use the shape-margin property. This creates a margin between the line of the shape and the content running alongside it.

See the Pen Smashing Shapes: shape-margin by Rachel Andrew (@rachelandrew) on CodePen.
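Roughly, that combination looks like this (the selector, file name, and margin value are placeholders, not the exact code from the Pen):

.shape {
  float: left;
  shape-outside: url(asparagus.png); /* the image's alpha channel defines the shape */
  shape-margin: 20px;                /* pushes the wrapping text away from the shape's edge */
}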

Using Generated Content For Our Shape

In the case above, we have the image displayed on the page and then the text curved around it. However, you could also use an image as the path for the shape in order to create a curved text effect without also including the image on the page. You still need something to float, however, and so for this, we can use Generated Content.

See the Pen Smashing Shapes: generated content by Rachel Andrew (@rachelandrew) on CodePen.

In this example, we have inserted some generated content, floated it left, given it a width and a height and then used shape-outside with our image just as before. We then get a curved line against the whitespace, but no visible image.
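In outline, and with placeholder dimensions and file name rather than the exact Pen code, that approach looks something like this:

.curve::before {
  content: "";
  float: left;
  width: 400px;   /* the float needs explicit dimensions */
  height: 300px;
  shape-outside: url(asparagus.png); /* the image shapes the text, but is never rendered */
}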

Using A Gradient For Our Shape

A CSS gradient is just like an image, which means we can use a gradient to create a shape, which can make for some interesting effects. In this next example, I have created a gradient which goes from blue to transparent; your gradient will need to have a transparent or semi-transparent area in order to use shapes. Once again, I have used generated content to add the gradient and am then using the gradient in the value for shape-outside.

Once the gradient becomes fully transparent, then the shape comes into play, and the content runs along the edge of the gradient.

See the Pen Smashing Shapes: gradients by Rachel Andrew (@rachelandrew) on CodePen.

Using shape-image-threshold To Position Text Over A Semi-Opaque Image

So far we have looked at using a completely transparent part of an image or of a gradient in order to create our shape, however, the third property defined in the CSS Shapes specification means that we can use images or gradients with semi-opaque areas by setting a threshold. A value for shape-image-threshold of 1 means fully opaque while 0 means fully transparent.

A gradient like our example above is a great way to see this in action as we can change the shape-image-threshold value and move the line along which the text falls to more opaque areas or more transparent areas. This property works in exactly the same way with an image that has an alpha channel yet is not fully transparent.

See the Pen Smashing Shapes: shape-image-threshold by Rachel Andrew (@rachelandrew) on CodePen.
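Roughly, and with an arbitrary gradient and threshold value rather than the exact Pen code:

.shape::before {
  content: "";
  float: left;
  width: 250px;
  height: 250px;
  background: linear-gradient(45deg, rebeccapurple, transparent);
  shape-outside: linear-gradient(45deg, rebeccapurple, transparent);
  shape-image-threshold: 0.3; /* the wrap line sits where the gradient's alpha crosses 0.3 */
}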

This method of creating shapes from images and gradients is — I think — the most straightforward way of creating a shape. You can create a shape as complex as you need it to be, in the comfort of a graphics application and then use that to define the shape on your page. That said, there is another way to create our shapes, and that’s by using Basic Shapes.

CSS Shapes With Basic Shapes

The Basic Shapes are a set of predefined shapes which cover a lot of different types of shapes you might want to create. To use a basic shape, you use the basic shape type as a value for shape-outside. This type uses functional notation, so we have the name of the shape followed by brackets (inside which are some values for our shape).

The options that you have are the following:

  • inset()
  • circle()
  • ellipse()
  • polygon()

We will take a look at the circle() type first as we can use this to understand some useful things which apply to all shapes which use the basic shape type. We will also have a look at the new tools in Firefox for inspecting these shapes.

In the example below, I am creating the most simple of shapes: a circle using shape-outside: circle(50%). I’m using generated content again, and I have given the box a background color, and also added a margin, border, and padding to help highlight some of the concepts of using CSS Shapes. You can see in the example that the circle is created centered on the box; this is because I have given the circle a value of 50%. That value is the <shape-radius> which can be a length or a percentage. I’ve used a percentage so that the radius is half of the size of my box.

See the Pen Smashing Shapes: shape-outside: circle() by Rachel Andrew (@rachelandrew) on CodePen.

This is a really good time to have a look at the shape that has been created using the Firefox Shape Path Editor. You can inspect the shape by clicking on the generated content and then clicking the little shape icon next to the property shape-outside; your shape will now highlight.


The Shape Path Editor highlights the circle shape.

You can see how the circle extends to the edge of the margin on our box. This is because the initial reference box used by our shape is margin-box. You already know something of reference boxes if you have ever added box-sizing: border-box to your CSS. When you do this, you are asking CSS to use the border-box and not the default content-box as the size of elements. In Shapes, we can also change which reference box is used. After any basic shape, add border-box to use the border to define the shape or content-box to use the edge of the content (inside the padding). For example:

.content::before {
    content: "";
    width: 150px;
    height: 150px;
    margin: 20px;
    padding: 20px;
    border: 10px solid #FC466B;
    background: linear-gradient(90deg, #FC466B 0%, #3F5EFB 100%);
    float: left;
    shape-outside: circle(50%) content-box;
}

You will see the circle appear to become much smaller. It is now using the width of the content — in this case the width of the box at 150px — rather than the margin box which includes the padding, border, and margin.


The content-box is the edge of the content of the square we created with our generated content.

Inspecting your element in Firefox DevTools will also show you the reference boxes so you can choose which might give you the best result with your particular shape.


Reference boxes (margin, border, and padding) highlighted in Firefox.

The Position Value

A second value can be passed to circle() which is a position; if you do not pass this value, it defaults to center. However, you can use this value to pull your circle around. In the next example, I have positioned the circle by using shape-outside: circle(50% at 30%); this changes where the center of the circle is positioned.

See the Pen Smashing Shapes: circle() with position by Rachel Andrew (@rachelandrew) on CodePen.

clip-path

Something useful to know is that the same <basic-shape> values can be used as a value for clip-path. This means that after creating a shape, you can clip away the image or background color that extends outside of the shape. In the example below, I am going to do this with our example gradient background, so that we end up with a circle that has text curved around from our square box.

See the Pen Smashing SHapes: circle() with clip-path by Rachel Andrew (@rachelandrew) on CodePen.
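Conceptually, the two properties simply share the same value. A sketch with placeholder sizes, reusing the gradient from earlier:

.shape::before {
  content: "";
  float: left;
  width: 250px;
  height: 250px;
  background: linear-gradient(90deg, #FC466B 0%, #3F5EFB 100%);
  shape-outside: circle(50%); /* the text wraps around the circle */
  clip-path: circle(50%);     /* the background outside the circle is clipped away */
}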

All of the above concepts can be applied to our other basic shapes. Now let’s have a quick look at how they work.

inset()

The inset() value defines a rectangle. This might not seem very useful as a float is a rectangle, however, this value means that you can inset the content wrapping your shape. It takes four values for top, right, bottom, and left plus a final value which defines a border radius.

In the example below, I am using the values to inset the content on the right and bottom of the floated image, plus adding a border radius around which my content will wrap using shape-outside: inset(0 30px 100px 0 round 40px). You can see how the content is now over the background color of the box:

See the Pen Smashing Shapes: inset() by Rachel Andrew (@rachelandrew) on CodePen.
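In CSS, that wrapping rule looks roughly like this (the selector is a placeholder):

.floated-image {
  float: left;
  /* inset the wrap line 30px from the right and 100px from the bottom,
     and round the resulting rectangle's corners by 40px */
  shape-outside: inset(0 30px 100px 0 round 40px);
}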

ellipse()

An ellipse is a squashed circle and as such needs two radii for x and y (in that order). You can then push the ellipse around just as with circle using the position value. In the example below, I am creating an ellipse and then using clip-path with the same values to remove the content outside of my shape.

See the Pen Smashing Shapes: ellipse() by Rachel Andrew (@rachelandrew) on CodePen.

In the above example, I also used shape-margin to demonstrate how we can use this property as with our image generated shapes to push the content away.
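A sketch combining the three properties mentioned, with placeholder values:

.shape::before {
  content: "";
  float: left;
  width: 200px;
  height: 300px;
  background-color: rebeccapurple;
  shape-outside: ellipse(100px 150px at 50% 50%); /* x radius, y radius, then position */
  clip-path: ellipse(100px 150px at 50% 50%);     /* remove the content outside the shape */
  shape-margin: 10px;                             /* push the wrapping text away slightly */
}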

polygon()

Creating polygon shapes gives us the most flexibility, as our shapes can be created with three or more points. The value passed to the polygon needs to be three or more pairs of values which represent coordinates.

It is here where the Firefox tools become really useful as we can use them to help create our polygon. In the below example, I have created a polygon with four points. In the Firefox DevTools, you can double-click on any line to create a new point, and double-click again to remove it. Once you have created a polygon that you are happy with, you can then copy the value out of DevTools for your CSS.

See the Pen Smashing Shapes: polygon() by Rachel Andrew (@rachelandrew) on CodePen.
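For example, a four-point polygon might be written like this (the coordinates are arbitrary, not the ones from the Pen):

.shape::before {
  content: "";
  float: left;
  width: 250px;
  height: 250px;
  background-color: rebeccapurple;
  /* each pair is an x y coordinate, listed in order around the shape */
  shape-outside: polygon(0 0, 100% 0, 100% 50%, 30% 100%);
  clip-path: polygon(0 0, 100% 0, 100% 50%, 30% 100%);
}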

Fallbacks

As CSS Shapes are applied to a float, in many cases the fallback is that instead of seeing the content wrap around a shape, the content will wrap around a floated element (in the way that content has always wrapped around floats). Browsers ignore properties they do not understand, so if they don’t understand Shapes, it doesn’t matter that the shape-outside property is there.

Where you should take care would be in any situation where not having shapes could mean that content overlaid an area which made it difficult to read. Perhaps you are using Shapes to push content away from a busy area of a background image, for example. In that case, you should first make sure that your content is usable for the non-Shapes people, then use Feature Queries to check for support of shape-outside and overwrite that CSS and apply the shape. For example, you could use a margin to push the content away for non-Shapes visitors and remove the margin inside your feature query.

.content {
    margin-left: 120px;
}

@supports (shape-outside: circle()) {
    .content {
        margin-left: 0;
        /* add the rest of your shapes CSS here */
    }
}

With Firefox releasing their support we now only have one main browser without support for Shapes — Edge. If you want to see Shapes support across the board you could go and vote for the feature here, and see if we can encourage the implementation of the feature in Edge.

Find Out More About CSS Shapes

In this article, I’ve tried to give a quick overview of some of the interesting things that are possible with CSS Shapes. For a more in-depth look at each feature, check out the Guides to CSS Shapes over at MDN. You can also read a guide to the Shape Path Editor in Firefox.


Website quality assurance: How to make certain your experiments don’t fail before they launch

According to Donald Norman, the user’s final experience of your website is at a reflective level: Was the experience delightful…

The post Website quality assurance: How to make certain your experiments don’t fail before they launch appeared first on WiderFunnel Conversion Optimization.


User Experience Testing: UX Methods and Tools


User experience testing often scares entrepreneurs and marketers. It seems like a daunting task, especially if you have lots of products or lots of pages on your website. However, it’s essential if you want more conversions. ClickMechanic conducted extensive user experience testing before relaunching its website. The testing resulted in an impressive 50 percent increase in conversions. Furthermore, a Magnetic North study revealed that more than 90 percent of respondents had had a poor user experience, and that about 33 percent reported they would abandon an online shopping cart due to poor UX. Clearly, there’s a correlation between user experience…

The post User Experience Testing: UX Methods and Tools appeared first on The Daily Egg.


Creating The Feature Queries Manager DevTools Extension





Ire Aderinokun



Within the past couple of years, several game-changing CSS features have been rolled out to the major browsers. CSS Grid Layout, for example, went from 0 to 80% global support within the span of a few months, making it an incredibly useful and reliable tool in our arsenal. Even though the current support for a feature like CSS Grid Layout is relatively great, not all recent or current browsers support it. This means it’s very likely that you and I will currently be developing for a browser in which it is not supported.

The modern solution to developing for both modern and legacy browsers is feature queries. They allow us to write CSS that is conditional on browser support for a particular feature. Although working with feature queries is almost magical, testing them can be a pain. Unlike media queries, we can’t easily simulate the different states by just resizing the browser. That’s where the Feature Queries Manager comes in, an extension to DevTools to help you easily toggle your feature query conditions. In this article, I will cover how I built this extension, as well as give an introduction to how developer tools extensions are built.

Working With Unsupported CSS

If a property-value pair (e.g. display: grid) is not supported by the browser the page is viewed in, not much happens. Unlike other programming languages, if something is broken or unsupported in CSS, it only affects the broken or unsupported rule, leaving everything else around it intact.

Let’s take, for example, this simple layout:

The layout in a supporting browser



We have a header spanning across the top of the page, a main section directly below that to the left, a sidebar to the right, and a footer spanning across the bottom of the page.

Here’s how we could create this layout using CSS Grid:

See the Pen layout-grid by Ire Aderinokun (@ire) on CodePen.

In a supporting browser like Chrome, this works just as we want. But if we were to view this same page in a browser that doesn’t support CSS Grid Layout, this is what we would get:

The layout in an unsupporting browser



It is essentially the same as if we had not applied any of the grid-related styles in the first place. This behavior of CSS was always intentional. In the CSS specification, it says:

In some cases, user agents must ignore part of an illegal style sheet, [which means to act] as if it had not been there

Historically, the best way to handle this has been to make use of the cascading nature of CSS. According to the specification, “the last declaration in document order wins.” This means that if there are multiple of the same property being defined within a single declaration block, the latter prevails.

For example, if we have the following declarations:

body {
  display: flex;
  display: grid;
}

Assuming both Flexbox and Grid are supported in the browser, the latter — display: grid — will prevail. But if Grid is not supported by the browser, then that rule is ignored, and any previous valid and supported rules, in this case display: flex, are used instead.

body {
  display: flex;   /* used when the browser does not support Grid */
  display: grid;   /* ignored by browsers without Grid support */
}

Cascading Problems

Using the cascade as a method for progressive enhancement is and has always been incredibly useful. Even today, there is no simpler or better way to handle simple one-liner fallbacks, such as this one for applying a solid colour where the rgba() syntax is not supported.

div {
    background-color: rgb(0,0,0);
    background-color: rgba(0,0,0,0.5);
}

Using the cascade, however, has one major limitation, which comes into play when we have multiple, dependent CSS rules. Let’s again take the layout example. If we were to attempt to use this cascade technique to create a fallback, we would end up with competing CSS rules.

See the Pen layout-both by Ire Aderinokun (@ire) on CodePen.

In the fallback solution, we need to use certain properties such as margins and widths, that aren’t needed and in fact interfere with the “enhanced” Grid version. This makes it difficult to rely on the cascade for more complex progressive enhancement.

Feature Queries To The Rescue!

Feature queries solve the problem of needing to apply groups of styles that are dependent on the support of a CSS feature. Feature queries are a “nested at-rule” which, like the media queries we are used to, allow us to create a subset of CSS declarations that are applied based on a condition. Unlike media queries, whose condition is dependent on device and screen specs, feature query conditions are instead based on if the browser supports a given property-value pair.

A feature query is made up of three parts:

  1. The @supports keyword
  2. The condition, e.g. display: flex
  3. The nested CSS declarations.

Here is how it looks:

@supports (display: grid) {
    body { display: grid; }
}

If the browser supports display: grid, then the nested styles will apply. If the browser does not support display: grid, then the block is skipped over entirely.

The above is an example of a positive condition within a feature query, but there are four flavors of feature queries:

  1. Positive condition, e.g. @supports (display: grid)

  2. Negative condition, e.g. @supports not (display: grid)

  3. Conjunction, e.g. @supports (display: flex) and (display: grid)

  4. Disjunction, e.g. @supports (display: -ms-grid) or (display: grid)

Feature queries solve the problem of having separate fallback and enhancement groups of styles. Let’s see how we can apply this to our example layout:

See the Pen Run bunny run by Ire Aderinokun (@ire) on CodePen.

Introducing The Feature Queries Manager

When we write media queries, we test them by resizing our browser so that the styles at each breakpoint apply. So how do we test feature queries?

Since feature queries are dependent on whether a browser supports a feature, there is no easy way to simulate the alternative state. Currently, the only way to do this would be to edit your code to invalidate/reverse the feature query.

For example, if we wanted to simulate a state in which CSS Grid is not supported, we would have to do something like this:

/* fallback styles here */

@supports (display: grrrrrrrrid) {
    /* enhancement styles here */
}

This is where the Feature Queries Manager comes in. It is a way to reverse your feature queries without ever having to manually edit your code.


It works by simply negating the feature query as it is written. So the following feature query:

@supports (display: grid) {
    body { display: grid; }
}

Will become the following:

@supports not (display: grid) {
    body { display: grid; }
}

Fun fact, this method works for negative feature queries as well. For example, the following negative feature query:

@supports not (display: grid) {
    body { display: block; }
}

Will become the following:

@supports not (not (display: grid)) {
    body { display: block; }
}

Which is essentially the same as removing the "not" from the feature query.

@supports (display: grid) {
    body { display: block; }
}

Building The Feature Queries Manager

FQM is an extension to your browser’s Developer Tools. It works by registering all the CSS on a page, filtering out the CSS that is nested within a feature query, and giving us the ability to toggle the normal or “inverted” version of that feature query.

Creating A DevTools Panel

Before I go on to how I specifically built the FQM, let’s cover how to create a new DevTools panel in the first place. Like any other browser extension, we register a DevTools extension with the manifest file.


  "manifest_version": 2,
  "name": "Feature Queries Manager",
  "short_name": "FQM",
  "description": "Manage and toggle CSS on a page behind a @supports Feature Query.",
  "version": "0.1",
  "permissions": [
    "tabs",
    "activeTab",
    "<all_urls>"
  ],
  "icons": 
    "128": "images/icon@128.png",
    "64": "images/icon@64.png",
    "16": "images/icon@16.png",
    "48": "images/icon@48.png"
  
}

To create a new panel in DevTools, we need two files — a devtools_page, which is an HTML page with an attached script that registers the second file, panel.html, which controls the actual panel in DevTools.

The devtools script creates the panel page



First, we add the devtools_page to our manifest file:


  "manifest_version": 2,
  "name": "Feature Queries Manager",
  ...
  "devtools_page": "devtools.html",

Then, in our devtools.html file, we create a new panel in DevTools:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
</head>
<body>
<!-- Note: I’m using the browser-polyfill to be able to use the Promise-based WebExtension API in Chrome -->
<script src="../browser-polyfill.js"></script>

<!-- Create FQM panel -->
<script>
browser.devtools.panels.create("FQM", "images/icon@64.png", "panel.html");
</script>
</body>
</html>

Finally, we create our panel HTML page:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
</head>
<body>
  <h1>Hello, world!</h1>
</body>
</html>

If we open up our browser's DevTools, we will see a new panel called "FQM" which loads the panel.html page.

A new panel in browser DevTools showing the “Hello, World” text



Reading CSS From The Inspected Page

In the FQM, we need to access all the CSS referenced in the inspected document in order to know which are within feature queries. However, our DevTools panel doesn’t have direct access to anything on the page. If we want access to the inspected document, we need a content script.

The content script reads CSS from the HTML document



A content script is a JavaScript file that has the same access to the HTML page as any other piece of JavaScript embedded within it. To register a content script, we just add it to our manifest file:


      "manifest_version": 2,
      "name": "Feature Queries Manager",
      ...
      "content_scripts": [
        "matches": [""],
        "js": ["browser-polyfill.js", "content.js"]
      ],
    }

In our content script, we can then read all the stylesheets and css within them by accessing document.styleSheets:

Array.from(document.styleSheets).forEach((stylesheet) => {
  let cssRules;

  try {
    cssRules = Array.from(stylesheet.cssRules);
  } catch (err) {
    return console.warn(`[FQM] Can't read cssRules from stylesheet: ${stylesheet.href}`);
  }

  cssRules.forEach((rule, i) => {

    /* Check if css rule is a Feature Query */
    if (rule instanceof CSSSupportsRule) {
      /* do something with the css rule */
    }

  });
});

Connecting The Panel And The Content Scripts

Once we have the rules from the content script, we want to send them over to the panel so they can be visible there. Ideally, we would want something like this:

The content script passes information to the panel and the panel sends instructions to modify CSS back to the content



However, we can’t exactly do this, because the panel and content files can’t actually talk directly to each other. To pass information between these two files, we need a middleman — a background script. The resulting connection looks something like this:

The content and panel scripts communicate via a background script



As always, to register a background script, we need to add it to our manifest file:


  "manifest_version": 2,
  "name": "Feature Queries Manager",
  ...
  "background": 
    "scripts": ["browser-polyfill.js", "background.js"]
  ,
}

The background file will need to open up a connection to the panel script and listen for messages coming from there. When the background file receives a message from the panel, it passes it on to the content script, which is listening for messages from the background. The background script waits for a response from the content script and relays that message back to the panel.

Here's a basic example of how that works:

// panel.js

// Open up a connection to the background script
const portToBackgroundScript = browser.runtime.connect();

// Send message to content (via background)
portToBackgroundScript.postMessage("Hello from panel!");

// Listen for messages from content (via background)
portToBackgroundScript.onMessage.addListener((msg) => {
  console.log(msg);
  // => "Hello from content!"
});

// background.js

// Open up a connection to the panel script
browser.runtime.onConnect.addListener((port) => {

  // Listen for messages from panel
  port.onMessage.addListener((request) => {

    // Send message from panel.js -> content.js
    // and return response from content.js -> panel.js
    browser.tabs.sendMessage(request.tabId, request)
      .then((res) => port.postMessage(res));
  });
});

// content.js

// Listen for messages from background
browser.runtime.onMessage.addListener((msg) => {

  console.log(msg);
  // => "Hello from panel!"

  // Send message to panel
  return Promise.resolve("Hello from content!");
});

Managing Feature Queries

Lastly, we can get to the core of what the extension does, which is to “toggle” on/off the CSS related to a feature query.

If you recall, in the content script, we looped through all the CSS within feature queries. When we do this, we also need to save certain information about the CSS rule:

  1. The rule itself
  2. The stylesheet it belongs to
  3. The index of the rule within the stylesheet
  4. An “inverted” version of the rule.

This is what that looks like:

cssRules.forEach((rule, i) => {

  const cssRule = rule.cssText.substring(rule.cssText.indexOf("{"));
  const invertedCSSText = `@supports not (${rule.conditionText}) ${cssRule}`;

  FEATURE_QUERY_DECLARATIONS.push({
    rule: rule,
    stylesheet: stylesheet,
    index: i,
    invertedCSSText: invertedCSSText
  });

});

When the content script receives a message from the panel to invert all declarations relating to the feature query condition, we can easily replace the current rule with the inverted one (or vice versa).

function toggleCondition(condition, toggleOn) {
  FEATURE_QUERY_DECLARATIONS.forEach((declaration) => {
    if (declaration.rule.conditionText === condition) {

      // Remove current rule
      declaration.stylesheet.deleteRule(declaration.index);

      // Replace at index with either original or inverted declaration
      const rule = toggleOn ? declaration.rule.cssText : declaration.invertedCSSText;
      declaration.stylesheet.insertRule(rule, declaration.index);
    }
  });
}

And that is essentially it! The Feature Query Manager extension is currently available for Chrome and Firefox.

Limitations Of The FQM

The Feature Queries Manager works by “inverting” your feature queries, so that the opposite condition applies. This means that it cannot be used in every scenario.

Fallbacks

If your “enhancement” CSS is not written within a feature query, then the extension cannot be used as it is dependent on finding a CSS supports rule.

Unsupported Features

You need to take note of whether the browser you are using the FQM in supports the feature in question. This is particularly important if your original feature query is a negative condition, as inverting it will turn it into a positive condition. For example, if you wrote the following CSS:

div { background-color: blue; }

@supports not (display: grid) {
  div { background-color: pink; }
}

If you use the FQM to invert this condition, it will become the following:

div { background-color: blue; }

@supports (display: grid) {
  div { background-color: pink; }
}

For you to be able to actually see the difference, you would need to be using a browser which does in fact support display: grid.

I built the Feature Queries Manager as a way to more easily test the different CSS as I develop, but it isn’t a replacement for testing layout in the actual browsers and devices. Developer tools only go so far, nothing beats real device testing.


Managing SVG Interaction With The Pointer Events Property





Tiffany Brown



Try clicking or tapping the SVG image below. If you put your pointer in the right place (the shaded path) then you should have Smashing Magazine’s homepage open in a new browser tab. If you tried to click on some white space, you might be really confused instead.

See the Pen Amethyst by Tiffany Brown (@webinista) on CodePen.

This is the dilemma I faced during a recent project that included links within SVG images. Sometimes when I clicked the image, the link worked. Other times it didn’t. Confusing, right?

I turned to the SVG specification to learn more about what might be happening and whether SVG offers a fix. The answer: pointer-events.

Not to be confused with DOM (Document Object Model) pointer events, pointer-events is both a CSS property and an SVG element attribute. With it, we can manage which parts of an SVG document or element can receive events from a pointing device such as a mouse, trackpad, or finger.

A note about terminology: “pointer events” is also the name of a device-agnostic, web platform feature for user input. However, in this article — and for the purposes of the pointer-events property — the phrase “pointer events” also includes mouse and touch events.

Outside Of The Box: SVG’s “Shape Model”

Using CSS with HTML imposes a box layout model on HTML. In the box layout model, every element generates a rectangle around its contents. That rectangle may be inline, inline-level, atomic inline-level, or block, but it’s still a rectangle with four right angles and four edges. When we add a link or an event listener to an element, the interactive area matches the dimensions of the rectangle.

Note: Adding a clip-path to an interactive element alters its interactive bounds. In other words, if you add a hexagonal clip-path path to an a element, only the points within the clipping path will be clickable. Similarly, adding a skew transformation will turn rectangles into rhomboids.

SVG does not have a box layout model. You see, when an SVG document is contained by an HTML document, within a CSS layout, the root SVG element adheres to the box layout model. Its child elements do not. As a result, most CSS layout-related properties don’t apply to SVG.

So instead, SVG has what I’ll call a ‘shape model’. When we add a link or an event listener to an SVG document or element, the interactive area will not necessarily be a rectangle. SVG elements do have a bounding box. The bounding box is defined as: the tightest fitting rectangle aligned with the axes of that element’s user coordinate system that entirely encloses it and its descendants. But initially, which parts of an SVG document are interactive depends on which parts are visible and/or painted.

Painted vs. Visible Elements

SVG elements can be “filled” but they can also be “stroked”. Fill refers to the interior of a shape. Stroke refers to its outline.

Together, “fill” and “stroke” are painting operations that render elements to the screen or page (also known as the canvas). When we talk about painted elements, we mean that the element has a fill and/or a stroke. Usually, this means the element is also visible.

However, an SVG element can be painted without being visible. This can happen when the visibility attribute or CSS property is set to hidden, or when display is none. The element is there and occupies theoretical space. We just can’t see it (and assistive technology may not detect it).

Perhaps more confusingly, an element can also be visible — that is, have a computed visibility value of visible — without being painted. This happens when elements lack both a stroke and a fill.

Note: Color values with alpha transparency (e.g. rgba(0,0,0,0)) do not affect whether an element is painted or visible. In other words, if an element has an alpha transparent fill or stroke, it’s painted even if it can’t be seen.

Knowing when an element is painted, visible, or neither is crucial to understanding the impact of each pointer-events value.

All Or None Or Something In Between: The Values

pointer-events is both a CSS property and an SVG element attribute. Its initial value is auto, which means that only the painted and visible portions will receive pointer events. Most other values can be split into two groups:

  1. Values that require an element to be visible; and
  2. Values that do not.

painted, fill, stroke, and all fall into the latter category. Their visibility-dependent counterparts — visiblePainted, visibleFill, visibleStroke and visible — fall into the former.

The SVG 2.0 specification also defines a bounding-box value. When the value of pointer-events is bounding-box, the rectangular area around the element can also receive pointer events. As of this writing, only Chrome 65+ supports the bounding-box value.

none is also a valid value. It prevents the element and its children from receiving any pointer events. The pointer-events CSS property can be used with HTML elements too. But when used with HTML, only auto and none are valid values.

Since pointer-events values are better demonstrated than explained, let’s look at some demos.

Here we have a circle with a fill and a stroke applied. It’s both painted and visible. The entire circle can receive pointer events, but the area outside of the circle cannot.

See the Pen Visible vs painted in SVG by Tiffany Brown (@webinista) on CodePen.

Disable the fill, so that its value is none. Now if you try to hover, click, or tap the interior of the circle, nothing happens. But if you do the same for the stroke area, pointer events are still dispatched. Changing the fill value to none means that this area is visible, but not painted.

Let’s make a small change to our markup. We’ll add pointer-events="visible" to our circle element, while keeping fill=none.
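
As a rough sketch (the dimensions, colors, and radius here are made up for illustration), the markup might look something like this:

<svg viewBox="0 0 200 200" xmlns="http://www.w3.org/2000/svg">
  <!-- fill="none" leaves the interior unpainted, but pointer-events="visible"
       lets the whole visible circle (stroke and interior) receive events -->
  <circle cx="100" cy="100" r="80" fill="none" stroke="rebeccapurple"
          stroke-width="10" pointer-events="visible" />
</svg>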

See the Pen How adding pointer-events: all affects interactivity by Tiffany Brown (@webinista) on CodePen.

Now the unpainted area encircled by the stroke can receive pointer events.

Augmenting The Clickable Area Of An SVG Image

Let’s return to the image from the beginning of this article. Our “amethyst” is a path element, as opposed to a group of polygons each with a stroke and fill. That means we can’t just add pointer-events="all" and call it a day.

Instead, we need to augment the click area. Let’s use what we know about painted and visible elements. In the example below, I’ve added a rectangle to our image markup.

See the Pen Augmenting the click area of an SVG image by Tiffany Brown (@webinista) on CodePen.

Even though this rectangle is unseen, it’s still technically visible (i.e. visibility: visible). Its lack of a fill, however, means that it is not painted. Our image looks the same. Indeed it still works the same — clicking white space still doesn’t trigger a navigation operation. We still need to add a pointer-events attribute to our a element. Using the visible or all values will work here.
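
As a sketch of the idea (the path data, colors, and link target here are placeholders rather than the original amethyst artwork), the structure looks something like this:

<svg viewBox="0 0 400 300" xmlns="http://www.w3.org/2000/svg">
  <a href="https://www.smashingmagazine.com/" pointer-events="all">
    <!-- Visible but unpainted: no fill, no stroke. With pointer-events="all"
         on the link, clicks on this area still trigger the navigation. -->
    <rect width="400" height="300" fill="none" />
    <path d="M200 20 L340 150 L200 280 L60 150 Z" fill="#96c" />
  </a>
</svg>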

See the Pen Augmenting the click area of an SVG image by Tiffany Brown (@webinista) on CodePen.

Now the entire image can receive pointer events.

Using bounding-box would eliminate the need for a phantom element. All points within the bounding box would receive pointer events, including the white space enclosed by the path. But again: pointer-events="bounding-box" isn’t widely supported. Until it is, we can use unpainted elements.

Using pointer-events When Mixing SVG And HTML

Another case where pointer-events may be helpful: using SVG inside of an HTML button.

See the Pen Ovxmmy by Tiffany Brown (@webinista) on CodePen.

In most browsers — Firefox and Internet Explorer 11 are exceptions here — the value of event.target will be an SVG element instead of our HTML button. Let’s add pointer-events="none" to our opening SVG tag.

See the Pen How pointer-events: none can be used with SVG and HTML by Tiffany Brown (@webinista) on CodePen.

Now when users click or tap our button, the event.target will refer to our button.
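
Here is a minimal sketch of that pattern; the button, icon, and listener are invented for illustration rather than taken from the demo:

<button id="like-button">
  <!-- pointer-events="none" stops the icon from becoming the event.target -->
  <svg pointer-events="none" viewBox="0 0 20 20" width="20" height="20" aria-hidden="true">
    <circle cx="10" cy="10" r="8" fill="currentColor" />
  </svg>
  Like
</button>

<script>
  document.getElementById('like-button').addEventListener('click', (event) => {
    // Logs the button element, even when the click lands on the SVG icon
    console.log(event.target);
  });
</script>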

Those well-versed in the DOM and JavaScript will note that using the function keyword instead of an arrow function and this instead of event.target fixes this problem. Using pointer-events="none" (or pointer-events: none; in your CSS), however, means that you don’t have to commit that particular JavaScript quirk to memory.

Conclusion

SVG supports the same kind of interactivity we’re used to with HTML. We can use it to create charts that respond to clicks or taps. We can create linked areas that don’t adhere to the CSS and HTML box model. And with the addition of pointer-events, we can improve the way our SVG documents behave in response to user interaction.

Browser support for SVG pointer-events is robust. Every browser that supports SVG supports the property for SVG documents and elements. When used with HTML elements, support is slightly less robust. It isn’t available in Internet Explorer 10 or its predecessors, or any version of Opera Mini.

We’ve just scratched the surface of pointer-events in this piece. For a more in-depth, technical treatment, read through the SVG Specification. MDN (Mozilla Developer Network) Web Docs offers more web developer-friendly documentation for pointer-events, complete with examples.

Smashing Editorial
(rb, ra, yk, il)


Link to original:

Managing SVG Interaction With The Pointer Events Property


A Guide To The State Of Print Stylesheets In 2018




A Guide To The State Of Print Stylesheets In 2018

Rachel Andrew



Today, I’d like to return to a subject that has already been covered in Smashing Magazine in the past — the topic of the print stylesheet. In this case, I am talking about printing pages directly from the browser. It’s an experience that can lead to frustration with enormous images (and even advertising) being printed out. Just sometimes, however, it adds a little bit of delight when a nicely optimized page comes out of the printer using a minimum of ink and paper and ensuring that everything is easy to read.

This article will explore how we can best create that second experience. We will take a look at how we should include print styles in our web pages, and look at the specifications that really come into their own once printing. We’ll find out about the state of browser support, and how to best test our print styles. I’ll then give you some pointers as to what to do when a print stylesheet isn’t enough for your printing needs.

Key Places For Print Support

If you still have not implemented any print styles on your site, there are a few key places where a solid print experience will be helpful to your users. For example, many users will want to print a transaction confirmation page after making a purchase or booking even if you will send details via email.

Any information that your visitor might want to use when away from their computer is also a good candidate for a print stylesheet. The most common thing that I print are recipes. I could load them up on my iPad but it is often more convenient to simply print the recipe to pop onto the fridge door while I cook. Other such examples might be directions or travel information. When traveling abroad and not always having access to data these printouts can be invaluable.

Reference materials of any sort are also often printed. For many people, being able to make notes on paper copies is the way they best learn. Again, it means the information is accessible in an offline format. It is easy for us to wonder why people want to print web pages, however, our job is often to make content accessible — in the best format for our visitors. If that best format is printed to paper, then who are we to argue?

Why Would This Page Be Printed?

A good question to ask when deciding on the content to include or hide in the print stylesheet is, “Why is the user printing this page?” Well, maybe there’s a recipe they’d like to follow while cooking in the kitchen or take along with them when shopping to buy ingredients. Or they’d like to print out a confirmation page after purchasing a ticket as proof of booking. Or perhaps they’d like a receipt or invoice to be printed (or printed to PDF) in order to store it in the accounts either as paper or electronically.

Considering the use of the printed document can help you to produce a print version of your content that is most useful in the context the user is in when referring to that print-out.

Workflow

Once we have decided to include print styles in our CSS, we need to add them to our workflow to ensure that when we make changes to the layout we also include those changes in the print version.

Adding Print Styles To A Page

To enable a “print stylesheet,” we are telling the browser that these CSS rules are for when the document is printed. One method of doing this is to link an additional stylesheet by using the <link> element.

<link rel="stylesheet" media="print" href="print.css">

This method does keep your print styles separate from everything else, which you might consider to be tidier; however, that has downsides.

The linked stylesheet creates an additional request to the server. In addition, that nice, neat separation of print styles from other styles can work against you. While you may take care to update the separate styles before going live, the stylesheet may find itself suffering due to being out of sight and therefore out of mind — ultimately becoming useless as features are added to the site but not reflected in the print styles.

The alternate method for including print styles is to use @media in the same way that you include CSS for certain breakpoints in your responsive design. This method keeps all of the CSS for a feature together: styles for narrow to wide breakpoints, and styles for print. Alongside Feature Queries with @supports, this encourages a way of development that ensures that all of the CSS for a design feature is kept and maintained together.

@media print {

}

Overwriting Screen CSS Or Creating Separate Rules

Much of the time, you are likely to find that the CSS you use for the screen display works for print with a few small adjustments. Therefore, you only need to write print CSS for the things you want to change from that base CSS. You might overwrite a font size or family, yet leave other elements in the CSS alone.

If you really want to have completely separate styles for print and start with a blank slate then you will need to wrap the rest of your site styles in a Media Query with the screen keyword.

@media screen {

}

On that note, if you are using Media Queries for your Responsive Design, then you may have written them for screen.

@media screen and (min-width: 500px) {

}

If you want these styles to be used when printing, then you should remove the screen keyword. In practice, however, I often find that if I work “mobile first” the single column mobile layout is a really good starting point for my print layout. By having the media queries that bring in the more complex layouts for screen only, I have far less overwriting of styles to do for print.
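
As a sketch of that approach (the selector and breakpoint are illustrative), the single-column base styles apply everywhere, while the more complex layout is scoped to screen:

/* Base, single-column styles: used for small screens and for print */
body {
  font-family: Georgia, serif;
  line-height: 1.5;
}

/* The multi-column layout only ever applies on screen */
@media screen and (min-width: 700px) {
  .page-wrapper {
    display: grid;
    grid-template-columns: 2fr 1fr;
    grid-gap: 2em;
  }
}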

Add Your Print Styles To Your Pattern Libraries And Style Guides

To help ensure that your print styles are seen as an integral part of the site design, add them to your style guide or pattern library for the site if you have one. That way there is always a reminder that the print styles exist, and that any new pattern created will need to have an equivalent print version. In this way, you are giving the print styles visibility as a first-class citizen of your design system.

Basics Of CSS For Print

When it comes to creating the CSS for print, there are three things you are likely to find yourself doing. You will want to hide, and not display content which is irrelevant when printed. You may also want to add content to make a print version more useful. You might also want to adjust fonts or other elements of your page to optimize them for print. Let’s take a look at these techniques.

Hiding Content

In CSS the method to hide content and also prevent generation of boxes is to use the display property with a value of none.

.box {
  display: none;
}
Using display: none will collapse the element and all of its child elements. Therefore, if you have an image gallery marked up as a list, all you would need to do to hide this when printed is to set display: none on the ul.

Things that you might want to hide are images which would be unnecessary when printed, navigation, advertising panels and areas of the page which display links to related content and so on. Referring back to why a user might print the page can help you to decide what to remove.
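
For example, a print block along these lines (the class names are placeholders for whatever your site uses) removes the typical candidates in one go:

@media print {
  .image-gallery,
  .site-navigation,
  .advert-panel,
  .related-links {
    display: none;
  }
}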

Inserting Content

There might be some content that makes sense to display when the page is printed. You could have some content set to display: none in a screen stylesheet and show it in your print stylesheet. Additionally, however, you can use CSS to expose content not normally output to the screen. A good example of this would be the URL of a link in the document. In your screen document, a link would normally show the link text which can then be clicked to visit that new page or external website. When printed, links cannot be followed; however, it might be useful if the reader could see the URL in case they wished to visit the link at a later time.

We achieve this by using CSS Generated Content. Generated Content gives you a way to insert content into your document via CSS. When printing, this becomes very useful.

You can insert a simple text string into your document. The next example targets the element with a class of wrapper and inserts before it the string, “Please see www.mysite.com for the latest version of this information.”

.wrapper::after {
  content: "Please see www.mysite.com for the latest version of this information.";
}
You can also insert things that already exist in the document; an example would be the content of the link href. We add Generated Content after each instance of an a element with an href attribute, and the content we insert is the value of the href attribute – which will be the link.

a[href]:after {
  content: " (" attr(href) ")";
}
You could use the newer CSS :not selector to exclude internal links if you wished.

a[href^="http"]:not([href*="example.com"]):after {
  content: " (" attr(href) ")";
}
There are some other useful tips like this in the article, “I Totally Forgot About Print Stylesheets”, written by Manuel Matuzovic.

Advanced Print Styling

If your printed version fits neatly onto one page then you should be able to create a print stylesheet relatively simply by using the techniques of the last section. However, once you have something which prints onto multiple pages (and particularly if it contains elements such as tables or figures), you may find that items break onto new pages in a suboptimal manner. You may also want to control things about the page itself, e.g. changing the margin size.

CSS does have a way to do these things, however, as we will see, browser support is patchy.

Paged Media

The CSS Paged Media Specification opens with the following description of its role.

“This CSS module specifies how pages are generated and laid out to hold fragmented content in a paged presentation. It adds functionality for controlling page margins, page size and orientation, and headers and footers, and extends generated content to enable page numbering and running headers/footers.”

The screen is continuous media; if there is more content, we scroll to see it. There is no concept of it being broken up into individual pages. As soon as we are printing, we output to a fixed-size page, described in the specification as paged media. The Paged Media specification doesn’t deal with how content is fragmented between pages; we will get to that later. Instead, it looks at the features of the pages themselves.

We need a way to target an individual page, and we do this by using the @page rule. This is used much like a regular selector, in that we target @page and then write CSS to be used by the page. A simple example would be to change the margin on all of the pages created when you print your document.

@page {
  margin: 20px;
}
You can target specific pages with :left and :right spread pseudo-class selectors. The first page can be targeted with the :first pseudo-class selector and blank pages caused by page breaks can be selected with :blank. For example, to set a top margin only on the first page:

@page :first {
  margin-top: 250pt;
}
To set a larger margin on the right side of a left-hand page and the left side of a right-hand page:

@page :left {
  margin-right: 200pt;
}

@page :right {
  margin-left: 200pt;
}
The specification defines being able to insert content into the margins created, however, no browser appears to support this feature. I describe this in my article about creating stylesheets for use with print-specific user agents, Designing For Print With CSS.

CSS Fragmentation

Where the Paged Media module deals with the page boxes themselves, the CSS Fragmentation Module details how content breaks between fragmentainers. A fragmentainer (or fragment container) is a container which contains a portion of a fragmented flow. This is a flow which, when it gets to a point where it would overflow, breaks into a new container.

The contexts in which you will encounter fragmentation currently are in paged media, therefore when printing, and also when using Multiple-column layout and your content breaks between column boxes. The Fragmentation specification defines various rules for breaking, CSS properties that give you some control over how content breaks into new fragments, in these contexts. It also defines how content breaks in the CSS Regions specification, although this isn’t something usable cross-browser right now.

And, speaking of browsers, fragmentation is a bit of a mess in terms of support at the moment. The browser compatibility tables for each property on MDN seem to be accurate as to support; however, you will need to test your use of these properties carefully.

Older Properties From CSS2

In addition to the break-* properties from CSS Fragmentation Level 3, we have page-break-* properties which came from CSS2. In spec terms, these have been superseded by the newer break-* properties, as the newer ones are more generic and can be used in the different contexts in which breaking happens. There isn’t much difference between a page and a multicol break. However, in terms of browser support, there is better support for the older properties. This means you may well need to use them at the current time to control breaking. Browsers that implement the newer properties are expected to alias the older ones rather than drop them.

In the examples that follow, I shall show both the new property and the old one where it exists.

break-before & break-after

These properties deal with breaks between boxes, and accept the following values, with the initial value being auto. The final four values do not apply to paged media, instead being for multicol and regions.

  • auto
  • avoid
  • avoid-page
  • page
  • left
  • right
  • recto
  • verso
  • avoid-column
  • column
  • avoid-region
  • region

The older properties of page-break-before and page-break-after accept a smaller range of values.

  • auto
  • always
  • avoid
  • left
  • right
  • inherit

To always cause a page break before an h2 element, you would use the following:

h2 {
  break-before: page;
}

To avoid a paragraph being detached from the heading immediately preceding it:

h2, h3 {
  break-after: avoid-page;
}

The older page-break-* property to always cause a page break before an h2:

h2 {
  page-break-before: always;
}

To avoid a paragraph being detached from the heading immediately preceding it:

h2, h3 {
  page-break-after: avoid;
}

You can find information and usage examples for these properties on MDN.

break-inside

This property controls breaks inside boxes and accepts the values:

  • auto
  • avoid
  • avoid-page
  • avoid-column
  • avoid-region

As with the previous two properties, there is an aliased page-break-inside from CSS2, which accepts the values:

  • auto
  • avoid
  • inherit

For example, perhaps you have a figure or a table and you don’t want a half of it to end up on one page and the other half on another page.

figure {
  break-inside: avoid;
}

And when using the older property:

figure {
  page-break-inside: avoid;
}

Again, information and usage examples for both properties are available on MDN.

Orphans And Widows

The Fragmentation specification also defines the properties orphans and widows. The orphans property defines how many lines can be left at the bottom of the first page when content such as a paragraph is broken between two pages. The widows property defines how many lines may be left at the top of the second page.

Therefore, in order to prevent ending up with a single line at the end of a page and a single line at the top the next page, you can use the following:

p {
  orphans: 2;
  widows: 2;
}

The widows and orphans properties are well supported (the missing browser implementation being Firefox).

Both properties are documented, with examples, on MDN.

box-decoration-break

The final property defined in the Fragmentation module is box-decoration-break. This property deals with whether borders, margins, and padding break or wrap the content. The values it accepts are:

  • slice
  • clone

For example, if my content area has a 10-pixel grey border and I print the content, then the default way that this will print is that the border will continue onto each page; however, it will only wrap at the end of the content. So we get a break before going to the next page and continuing.


The border does not wrap each page and so breaks between pages

If I use box-decoration-break: clone, the border and any padding and margin will complete on each page, thus giving each page a grey border.


The border wraps each individual page

Currently, this only works for Paged Media in Firefox, and you can find out more about box-decoration-break on MDN.
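
Applying it looks something like this; .content is a stand-in for whatever element wraps your printed content:

.content {
  border: 10px solid #999;
  /* Repeat the border (and any padding and margin) on every page */
  box-decoration-break: clone;
}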

Browser Support

As already mentioned, browser support is patchy for Paged Media and Fragmentation. Where Fragmentation is concerned, an additional issue is that breaking has to be specified and implemented for each layout method. If you were hoping to use Flexbox or CSS Grid in print stylesheets, you will probably be disappointed. You can check out the Chrome bugs for Flexbox and for Grid.

The best suggestion I can give right now is to keep your print stylesheets reasonably simple. Add fragmentation properties — including both the old page-break-* properties as well as the new properties. However, accept that these may well not work in all browsers. And, if you find lack of browser support frustrating, raise these issues with browsers or vote for already raised issues. Fragmentation, in particular, should be treated as a suggestion rather than a command, even where it is supported. It would be possible to be so specific about where and when you want things to break that it is almost impossible to lay out the pages. You should assume that sometimes you may get suboptimal breaking.

Testing Print Stylesheets

Testing print stylesheets can be something of a bore, typically requiring using print preview or printing to a PDF repeatedly. However, browser DevTools have made this a little easier for us. Both Chrome and Firefox have a way to view the print styles only.

Firefox

Open the Developer Toolbar then type media emulate print at the prompt.


Emulating print styles in Firefox by typing media emulate print

Chrome

Open DevTools, click on the three dots icon and then select “More Tools” and “Rendering”. You can then select print under Emulate CSS Media.


Emulating print styles in Chrome via the Rendering panel

This will only be helpful in testing changes to the CSS layout, hidden or generated content. It can’t help you with fragmentation — you will need to print or print to PDF for that. However, it will save you a few round trips to the printer and can help you check as you develop new parts of the site that you are still hiding and showing the correct things.

What To Do When A Print Stylesheet Isn’t Enough

In an ideal world, browsers would have implemented more of the Paged Media specification for printing directly from the browser, and fragmentation would be more thoroughly implemented in a consistent way. It is certainly worth raising the bugs that you find when printing from the browser with the browsers concerned. If we don’t request that these things are fixed, they will remain a low priority.

If you do need to have a high level of print support and want to use CSS, then currently you would need to use a print-specific User Agent, such as Prince. I detail how you can use CSS to format books when outputting to Prince in my article “Designing For Print With CSS.”

Prince is also available to install on your server in order to generate nicely printed documents using CSS on the web; however, it comes at a high price. An alternative is a service like DocRaptor, which offers an API on top of the Prince rendering engine.

There are open-source HTML- and CSS-to-PDF generators such as wkhtmltopdf, but most use browser rendering engines to create the print output and therefore have the same limitations as browsers when it comes to implementing the Paged Media and Fragmentation specifications. An exception is WeasyPrint, which seems to have its own implementation and supports slightly different features, although it is not in any way as full-featured as something like Prince.

You will find more information about user agents for print on the print-css.rocks site.

Other Resources

Due to the fact that printing from CSS has really moved on very little in the past few years, many older resources on Smashing Magazine and elsewhere are still valid. Some additional tips and tricks can be found in the following resources. If you have discovered a useful print workflow or technical tip, then add it to the comments below.

Smashing Editorial
(il)


Source:

A Guide To The State Of Print Stylesheets In 2018

Automating Your Feature Testing With Selenium WebDriver




Automating Your Feature Testing With Selenium WebDriver

Nils Schütte



This article is for web developers who wish to spend less time testing the front end of their web applications but still want to be confident that every feature works fine. It will save you time by automating repetitive online tasks with Selenium WebDriver. You will find a step-by-step example for automating and testing the login function of WordPress, but you can also adapt the example for any other login form.

What Is Selenium And How Can It Help You?

Selenium is a framework for the automated testing of web applications. Using Selenium, you can basically automate every task in your browser as if a real person were to execute the task. The interface used to send commands to the different browsers is called Selenium WebDriver. Implementations of this interface are available for every major browser, including Mozilla Firefox, Google Chrome and Internet Explorer.

Automating Your Feature Testing With Selenium WebDriver

Which type of web developer are you? Are you the disciplined type who tests all key features of your web application after each deployment? If so, you are probably annoyed by how much time this repetitive testing consumes. Or are you the type who just doesn’t bother with testing key features and always thinks, “I should test more, but I’d rather develop new stuff.” If so, you probably only find bugs by chance or when your client or boss complains about them.

I have been working for a well-known online retailer in Germany for quite a while, and I always belonged to the second category: It was so exciting to think of new features for the online shop, and I didn’t like at all going over all of the previous features again after each new software deployment. So, the strategy was more or less to hope that all key features would work.

One day, we had a serious drop in our conversion rate and started digging in our web analytics tools to find the source of this drop. It took quite a while before we found out that our checkout did not work properly since the previous software deployment.

This was the day when I started to do some research about automating our testing process of web applications, and I stumbled upon Selenium and its WebDriver. Selenium is basically a framework that allows you to automate web browsers. WebDriver is the name of the key interface that allows you to send commands to all major browsers (mobile and desktop) and work with them as a real user would.

Preparing The First Test With Selenium WebDriver

First, I was a little skeptical of whether Selenium would suit my needs because the framework is most commonly used in Java, and I am certainly not a Java expert. Later, I learned that being a Java expert is not necessary to take advantage of the power of the Selenium framework.

As a simple first test, I tested the login of one of my WordPress projects. Why WordPress? Just because using the WordPress login form is an example that everybody can follow more easily than if I were to refer to some custom web application.

What do you need to start using Selenium WebDriver? Because I decided to use the most common implementation of Selenium in Java, I needed to set up my little Java environment.

If you want to follow my example, you can use the Java environment of your choice. If you haven’t set one up yet, I suggest installing Eclipse and making sure you are able to run a simple “Hello world” script in Java.

Because I wanted to test the login in Chrome, I made sure that the Chrome browser was already installed on my machine. That’s all I did in preparation.

Downloading The ChromeDriver

All major browsers provide their own implementation of the WebDriver interface. Because I wanted to test the WordPress login in Chrome, I needed to get the WebDriver implementation of Chrome: ChromeDriver.

I extracted the ZIP archive and stored the executable file chromedriver.exe in a location that I could remember for later.

Setting Up Our Selenium Project In Eclipse

The steps I took in Eclipse are probably pretty basic to someone who works a lot with Java and Eclipse. But for those like me, who are not so familiar with this, I will go over the individual steps:

  1. Open Eclipse.
  2. Click the “New” icon.
    Creating a new project in Eclipse
  3. Choose the wizard to create a new “Java Project,” and click “Next.”
    Choosing the java-project wizard
    Choose the java-project wizard.
  4. Give your project a name, and click “Finish.”
    Eclipse project wizard
    The eclipse project wizard
  5. Now you should see your new Java project on the left side of the screen.
    Java project successfully created
    We successfully created a project to run the Selenium WebDriver.

Adding The Selenium Library To Our Project

Now we have our Java project, but Selenium is still missing. So, next, we need to bring the Selenium framework into our Java project. Here are the steps I took:

  1. Download the latest version of the Java Selenium library.

    Downloading the Selenium library
    Download the Selenium library.
  2. Extract the archive, and store the folder in a place you can remember easily.
  3. Go back to Eclipse, and go to “Project” → “Properties.”
    Eclipse Properties
    Go to properties to integrate the Selenium WebDriver in your project.
  4. In the dialog, go to “Java Build Path” and then to the “Libraries” tab.
  5. Click on “Add External JARs.”
    Adding the Selenium lib to your Java build path.
    Add the Selenium lib to your Java build path.
  6. Navigate to the just downloaded folder with the Selenium library. Highlight all .jar files and click “Open.”
    Selecting the correct files of the Selenium lib.
    Select all files of the lib to add to your project.
  7. Repeat this for all .jar files in the subfolder libs as well.
  8. Eventually, you should see all .jar files in the libraries of your project:
    Selenium WebDriver framework successfully integrated into your project
    The Selenium WebDriver framework has now been successfully integrated into your project!

That’s it! Everything we’ve done until now is a one-time task. You could use this project now for all of your different tests, and you wouldn’t need to do the whole setup process for every test case again. Kind of neat, isn’t it?

Creating Our Testing Class And Letting It Open the Chrome Browser

Now we have our Selenium project, but what next? To see whether it works at all, I wanted to try something really simple, like just opening my Chrome browser.

To do this, I needed to create a new Java class from which I could execute my first test case. Into this executable class, I copied a few Java code lines, and believe it or not, it worked! Magically, the Chrome browser opened and, after a few seconds, closed all by itself.

Try it yourself:

  1. Click on the “New” button again (while you are in your new project’s folder).
    New class in eclipse
    Create a new class to run the Selenium WebDriver.
  2. Choose the “Class” wizard, and click “Next.”
    New class wizard in eclipse
    Choose the Java class wizard to create a new class.
  3. Name your class (for example, “RunTest”), and click “Finish.”
    Eclipse Java Class wizard
    The eclipse Java Class wizard.
  4. Replace all code in your new class with the following code. The only thing you need to change is the path to chromedriver.exe on your computer:
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    
    /**
     * @author Nils Schuette via frontendtest.org
     */
    public class RunTest {
        static WebDriver webDriver;
        /**
         * @param args
         * @throws InterruptedException
         */
        public static void main(final String[] args) throws InterruptedException {
            // Telling the system where to find the chrome driver
            System.setProperty(
                    "webdriver.chrome.driver",
                    "C:/PATH/TO/chromedriver.exe");

            // Open the Chrome browser
            webDriver = new ChromeDriver();

            // Waiting a bit before closing
            Thread.sleep(7000);

            // Closing the browser and WebDriver
            webDriver.close();
            webDriver.quit();
        }
    }
    
  5. Save your file, and click on the play button to run your code.
    Run Eclipse project
    Running your first Selenium WebDriver project.
  6. If you have done everything correctly, the code should open a new instance of the Chrome browser and close it shortly thereafter.
    Chrome Browser blank window
    The Chrome Browser opens itself magically. (Large preview)

Testing The WordPress Admin Login

Now I was optimistic that I could automate my first little feature test. I wanted the browser to navigate to one of my WordPress projects, login to the admin area and verify that the login was successful. So, what commands did I need to look up?

  1. Navigate to the login form,
  2. Locate the input fields,
  3. Type the username and password into the input fields,
  4. Hit the login button,
  5. Compare the current page’s headline to see if the login was successful.

Again, after I had done all the necessary updates to my code and clicked on the run button in Eclipse, my browser started to magically work itself through the WordPress login. I successfully ran my first automated website test!

If you want to try this yourself, replace all of the code of your Java class with the following. I will go through the code in detail afterwards. Before executing the code, you must replace four values with your own:

  1. The location of your chromedriver.exe file (as above),

  2. The URL of the WordPress admin account that you want to test,

  3. The WordPress username,

  4. The WordPress password.

Then, save and let it run again. It will open Chrome, navigate to the login of your WordPress website, login and check whether the h1 headline of the current page is “Dashboard.”

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * @author Nils Schuette via frontendtest.org
 */
public class RunTest {
    static WebDriver webDriver;
    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(final String[] args) throws InterruptedException {
        // Telling the system where to find the chrome driver
        System.setProperty(
                "webdriver.chrome.driver",
                "C:/PATH/TO/chromedriver.exe");

        // Open the Chrome browser
        webDriver = new ChromeDriver();

        // Maximize the browser window
        webDriver.manage().window().maximize();

        if (testWordpresslogin()) {
            System.out.println("Test WordPress Login: Passed");
        } else {
            System.out.println("Test WordPress Login: Failed");
        }

        // Close the browser and WebDriver
        webDriver.close();
        webDriver.quit();
    }

    private static boolean testWordpresslogin() {
        try {
            // Navigate to the WordPress admin login page
            webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

            // Type in the username
            webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

            // Type in the password
            webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

            // Click the Submit button
            webDriver.findElement(By.id("wp-submit")).click();

            // Wait a little bit (7000 milliseconds)
            Thread.sleep(7000);

            // Check whether the h1 equals “Dashboard”
            if (webDriver.findElement(By.tagName("h1")).getText()
                    .equals("Dashboard")) {
                return true;
            } else {
                return false;
            }

        // If anything goes wrong, return false.
        } catch (final Exception e) {
            System.out.println(e.getClass().toString());
            return false;
        }
    }
}

If you have done everything correctly, your output in the Eclipse console should look something like this:

Eclipse console test result.
The Eclipse console states that our first test has passed. (Large preview)

Understanding The Code

Because you are probably a web developer and have at least a basic understanding of other programming languages, I am sure you already grasp the basic idea of the code: We have created a separate method, testWordpresslogin, for the specific test case that is called from our main method.

Depending on whether the method returns true or false, you will get an output in your console telling you whether this specific test passed or failed.

This is not necessary, but this way you can easily add many more test cases to this class and still have readable code.

Now, step by step, here is what happens in our little program:

  1. First, we tell our program where it can find the specific WebDriver for Chrome.
    System.setProperty("webdriver.chrome.driver","C:/PATH/TO/chromedriver.exe");
  2. We open the Chrome browser and maximize the browser window.
    webDriver = new ChromeDriver();
    webDriver.manage().window().maximize();
  3. This is where we jump into our submethod and check whether it returns true or false.
    if (testWordpresslogin()) …
  4. The following part in our submethod might not be intuitive to understand:
    The try…catch… blocks. If everything goes as expected, only the code in try… will be executed, but if anything goes wrong while executing try…, then execution continues in catch{}. Whenever you try to locate an element with findElement and the browser is not able to locate this element, it will throw an exception and execute the code in catch…. In my example, the test will be marked as “failed” whenever something goes wrong and the catch{} is executed.
  5. In the submethod, we start by navigating to our WordPress admin area and locating the fields for the username and the password by looking for their IDs. Also, we type the given values in these fields.
    webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");
    webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");
    webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

    Wordpress login form
    Selenium fills out our login form
  6. After filling in the login form, we locate the submit button by its ID and click it.
    webDriver.findElement(By.id("wp-submit")).click();
  7. In order to follow the test visually, I include a 7-second pause here (7000 milliseconds = 7 seconds).
    Thread.sleep(7000);
  8. If the login is successful, the h1 headline of the current page should now be “Dashboard,” referring to the WordPress admin area. Because the h1 headline should exist only once on every page, I have used the tag name here to locate the element. In most other cases, the tag name is not a good locator because an HTML tag name is rarely unique on a web page. After locating the h1, we extract the text of the element with getText() and check whether it is equal to the string “Dashboard.” If the login is not successful, we would not find “Dashboard” as the current h1. Therefore, I’ve decided to use the h1 to check whether the login is successful.
    if (webDriver.findElement(By.tagName("h1")).getText()
            .equals("Dashboard")) {
        return true;
    } else {
        return false;
    }

    Wordpress Dashboard
    Letting the WebDriver check, whether we have arrived on the Dashboard: Test passed! (Large preview)
  9. If anything has gone wrong in the previous part of the submethod, the program would have jumped directly to the following part. The catch block will print the type of exception that happened to the console and afterwards return false to the main method.
    catch (final Exception e) {
        System.out.println(e.getClass().toString());
        return false;
    }

Adapting The Test Case

This is where it gets interesting if you want to adapt and add test cases of your own. You can see that we always call methods of the webDriver object to do something with the Chrome browser.

First, we maximize the window:

webDriver.manage().window().maximize();

Then, in a separate method, we navigate to our WordPress admin area:

webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

There are other methods of the webDriver object we can use. Besides the two above, you will probably use this one a lot:

webDriver.findElement(By. …)

The findElement method helps us find different elements in the DOM. There are different options to find elements:

  • By.id
  • By.cssSelector
  • By.className
  • By.linkText
  • By.name
  • By.xpath

If possible, I recommend using By.id because the ID of an element should always be unique (unlike, for example, the className), and it is usually not affected if the structure of your DOM changes (unlike, say, the xPath).

Note: You can read more about the different options for locating elements with WebDriver over here.
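
To illustrate, here are lookups written with the different strategies; the selectors are loosely modeled on the WordPress login form and should be treated as examples rather than exact values:

// Locating elements with the different By strategies (illustrative selectors)
webDriver.findElement(By.id("user_login"));                                // by ID (preferred)
webDriver.findElement(By.name("log"));                                     // by name attribute
webDriver.findElement(By.className("button-primary"));                     // by class name
webDriver.findElement(By.linkText("Lost your password?"));                 // by the exact link text
webDriver.findElement(By.cssSelector("#loginform input[type='submit']"));  // by CSS selector
webDriver.findElement(By.xpath("//form[@id='loginform']//input[1]"));      // by XPath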

As soon as you get ahold of an element using the findElement method, you can call the different available methods of the element. The most common ones are sendKeys, click and getText.

We’re using sendKeys to fill in the login form:

webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

We have used click to submit the login form by clicking on the submit button:

webDriver.findElement(By.id("wp-submit")).click();

And getText has been used to check what text is in the h1 after the submit button is clicked:

webDriver.findElement(By.tagName("h1")).getText()

Note: Be sure to check out all the available methods that you can use with an element.

Conclusion

Ever since I discovered the power of Selenium WebDriver, my life as a web developer has changed. I simply love it. The deeper I dive into the framework, the more possibilities I discover — running one test simultaneously in Chrome, Internet Explorer and Firefox or even on my smartphone, or taking screenshots automatically of different pages and comparing them. Today, I use Selenium WebDriver not only for testing purposes, but also to automate repetitive tasks on the web. Whenever I see an opportunity to automate my work on the web, I simply copy my initial WebDriver project and adapt it to the next task.

If you think that Selenium WebDriver is for you, I recommend looking at Selenium’s documentation to find out about all of the possibilities of Selenium (such as running tasks simultaneously on several (mobile) devices with Selenium Grid).

I look forward to hearing whether you find WebDriver as useful as I do!

Smashing Editorial
(rb, ra, al, il)


Original link: 

Automating Your Feature Testing With Selenium WebDriver

The Rise Of Intelligent Conversational UI




The Rise Of Intelligent Conversational UI

Burke Holland



For a long time, we’ve thought of interfaces strictly in a visual sense: buttons, dropdown lists, sliders, carousels (please no more carousels). But now we are staring into a future composed not just of visual interfaces, but of conversational ones as well. Microsoft alone reports that three thousand new bots are built every week on their bot framework. Every. Week.

The importance of Conversational UI cannot be overstated, even if some of us wish it wasn’t happening.

The most important advancement in Conversational UI has been Natural Language Processing (NLP). This is the field of computing that deals not with deciphering the exact words that a user said, but with parsing the user’s actual intent out of them. If the bot is the interface, NLP is the brain. In this article, we’re going to take a look at why NLP is so important, and how you (yes, you!) can build your own.

Speech Recognition vs. NLP

Most people will be familiar with Amazon Echo, Cortana, Siri or Google Home, all of which have an interface that is primarily conversational. They are also all using NLP.





Aside from these intelligent assistants, most Conversational UIs have nothing to do with voice at all. They are text driven. These are the bots we chat with in Slack, Facebook Messenger or over SMS. They deliver high quality gifs in our chats, watch our build processes and even manage our pull requests.





Conversational UIs built on text are nice because there is no speech recognition component. The text is already parsed.

When it comes to a verbal interaction, the fundamental problem is not recognizing the speech. We’ve mostly got that one down.

OK, so maybe it’s not perfect. I still get voicemails every day like a game of Mad Libs that I never asked to play. iOS just sticks a blank line in whenever they don’t know what exactly was said.





Google, on the other hand, just tries to guess. Like this one from my father. I have absolutely no idea what this message is actually trying to say other than “Be Safe” which honestly sounds like my mom, and not my dad. I have a hard time believing he ever said that. I don’t trust the computer.





I’m picking on voice mail transcriptions here, which might be the hardest speech recognition to do given how degraded the audio quality is.

Nevertheless, speech recognition is largely a solved problem. It’s even built right into Chrome and it works remarkably well.





After we solved the problem of speech recognition, we started to use it everywhere. That was unfortunate because speech recognition on its own doesn’t do us a whole lot of good. Interfaces that rely solely on speech recognition require the user to state things in a precise way, and they can only state the limited number of exact words or phrases that the interface knows about. This is not natural. This is not how a conversation works.

Without NLP, Conversational UI can be a true nightmare.

Conversational UI Without NLP

We’re probably all familiar with automated phone menus. These are known as Interactive Voice Response systems — or IVRs for short. They are designed to take the place of the traditional operator and automatically transfer callers to the right place without having to talk to a human. On the surface, this seems like a good idea. In practice, it’s mostly just you waiting while a recorded voice reads out a list of menu items that “may have changed.”





A 2011 study from New York University found that 83% of people feel IVR systems “provide either no benefit at all, or only a cost savings benefit to the company.” They also noted that IVR systems “score lower than any other service option.” People would literally rather do anything else than use an automated phone menu.

NLP has changed the IVR market rather significantly in the past few years. NLP can pick a user’s intent out of anything they say, so it’s better to just let them say it and then determine if you support the action.

Check out how AT&T does it.

AT&T has a truly intelligent Conversational UI. It uses NLP to let me just state my intent. Also, notice that I don’t have to know what to say. I can fumble all around and it still picks out my intent.

AT&T also uses information that it already has (my phone number) and then leverages text messaging to send me a link to a traditional visual UI, which is probably a much better UX for making a payment. NLP drives the whole experience here. Without it, the rest of the interaction would not be nearly as smooth.

NLP is powerful, but more importantly, it is also accessible to developers everywhere. You don’t have to know a thing about Machine Learning (ML) or Artificial Intelligence (AI) to use it. All you need to know how to do is make an AJAX call. Even I can do that!

Building An NLP Interface

So much of Machine Learning still remains inaccessible to developers. Even the best YouTube videos on the subject quickly become hard to follow with subjects like Neural Networks and Gradient Descents. We have, however, made significant progress in the field of Language Processing, to the point that it’s accessible to developers of nearly any skill level.

Natural Language Processing differs based on the service, but the overall idea is that the user has an intent, and that intent contains entities. That means exactly nothing to you at the moment, so let’s work up a hypothetical Home Automation bot and see how this works.

The Home Automation Example

In the field of Natural Language Processing, the canonical “Hello World” is usually a Home Automation demo. This is because it helps to clearly demonstrate the fundamental concepts of NLP without overloading your brain.

A Home Automation Bot is a service that can control hypothetical lights in a hypothetical house. For instance, we might want to say “Turn on the kitchen lights”. That is our intent. If we said “Hello”, we are clearly expressing a different intent. Inside of that intent, there are two pieces of information that we need to complete the action:

  1. The ‘Location’ of the light (kitchen)
  2. The desired state of the lights ‘Power’ (on/off)

These (Location, Power) are known as entities.

When we are finished designing our NLP interface, we are going to be able to call an HTTP endpoint and pass it our intent: “Turn on the kitchen lights.” That endpoint will return to us the intent (Control Lights) and two objects representing our entities: Location and Power. We can then pass those into a function which actually controls our lights…

function controlLights(location, power) {
  console.log(`Turning ${power} the ${location} lights`);

  // TODO: Call an imaginary endpoint which controls lights
}

There are a lot of NLP services out there that are available today for developers. For this example, I’m going to show the LUIS project from Microsoft because it is free to use.

LUIS is a completely visual tool, so we won’t actually be writing any code at all. We’ve already talked about Intents and Entities, so you already know most of the terminology that you need to know to build this interface.

The first step is to create a “Control Lights” intent in LUIS.





Before I do anything with that intent, I need to define my Location and Power entities. Entities can be different types — kind of like types in a programming language. You can have dates, lists and even entities that are related to other entities. In this case, Power is a list of values (on, off) and Location is a simple entity, which can be any value.

It will be up to LUIS to be smart enough to figure out exactly what the Location is.









Now we can begin to train this model to understand all of the different ways that we might ask it to control the lights in a different location. Let’s think of all the different ways that we could do that:

  • Turn off the kitchen lights;
  • Turn off the lights in the office;
  • The lights in the living room, turn them on;
  • Lights, kitchen, off;
  • Turn off the lights (no location).

As I feed these into the Control Lights intent as utterances, LUIS tries to determine where in the intent the entities are. You can see that because Power is a discrete list of values, it gets that right every time.





But it has no idea what a Location even is. LUIS wants us to go through this list and tell it where the Location is. That’s done by clicking on a word or group of words and assigning it to the right entity. As we are doing this, we are really creating a machine learning model that LUIS is going to use to statistically estimate what qualifies as a Location.





When I’m done telling LUIS where in these utterances all the locations are, my dashboard looks like this…





Now we train the model by clicking on the “Train” button at the top. Do you feel like a data scientist yet?

Now I can test it using the test panel. You can see that LUIS is already pretty smart. The Power is easy to pick out, but it can actually pick out Locations it has never seen before. It’s doing what your brain does — using the information that it has to make an educated guess. Machine Learning is equal parts impressive and scary.





If we try hard enough, we can fool the AI. The more utterances we give it and label, the smarter it will get. I added 35 utterances to mine before I was done, and it is close to bulletproof.

So now we get to the important part, which is how we actually use this NLP in an app. LUIS has a “Publish” menu option which allows us to publish our model to the internet where it’s exposed via a single HTTP endpoint. It will look something like this…

https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/c4396135-ee3f-40a9-8b83-4704cddabf7a?subscription-key=19d29a12d3fc4d9084146b466638e62a&verbose=true&timezoneOffset=0&q=

The very last part of that query string is a q= variable. This is where we would pass our intent.

https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/c4396135-ee3f-40a9-8b83-4704cddabf7a?subscription-key=19d29a12d3fc4d9084146b466638e62a&verbose=true&timezoneOffset=0&q=turn on the kitchen lights
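
One thing worth noting is that the utterance has to be URL-encoded before it is appended to q=. A minimal sketch (the app ID and subscription key below are placeholders, not real values):

// Placeholders, not real credentials.
const endpoint = 'https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>?subscription-key=<key>&verbose=true&timezoneOffset=0&q=';

// encodeURIComponent turns "turn on the kitchen lights" into "turn%20on%20the%20kitchen%20lights"
const url = endpoint + encodeURIComponent('turn on the kitchen lights');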

The response that we get back is just a JSON object.


  "query": "turn on the kitchen lights",
  "topScoringIntent": 
    "intent": "Control Lights",
    "score": 0.999999046
  ,
  "intents": [
    
      "intent": "Control Lights",
      "score": 0.999999046
    ,
    
      "intent": "None",
      "score": 0.0532306843
    
  ],
  "entities": [
    
      "entity": "kitchen",
      "type": "Location",
      "startIndex": 12,
      "endIndex": 18,
      "score": 0.9516622
    ,
    
      "entity": "on",
      "type": "Power",
      "startIndex": 5,
      "endIndex": 6,
      "resolution": 
        "values": [
          "on"
        ]
      
    }
  ]
}

Now this is something that we can work with as developers! This is how you add NLP to any project — with a single REST endpoint. Now you’re free to create a bot with some real brains!
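
To make that concrete, here is a minimal sketch that wires the endpoint up to the controlLights() function from earlier. It assumes the response shape shown above and that fetch is available; handleUtterance() is a name I made up, and the app ID and key are placeholders.

// A sketch, not production code: no error handling, placeholder credentials.
async function handleUtterance(utterance) {
  const endpoint = 'https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>?subscription-key=<key>&verbose=true&timezoneOffset=0&q=';

  const response = await fetch(endpoint + encodeURIComponent(utterance));
  const result = await response.json();

  // Only act when LUIS thinks this is the "Control Lights" intent
  if (result.topScoringIntent.intent === 'Control Lights') {
    const location = result.entities.find(e => e.type === 'Location');
    const power = result.entities.find(e => e.type === 'Power');

    if (location && power) {
      controlLights(location.entity, power.entity);
    }
  }
}

handleUtterance('turn on the kitchen lights'); // logs "Turning on the kitchen lights"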

Brian Holt used the browser speech API and a LUIS model to create a voice-powered calculator that runs right inside of CodePen. Chrome is required for the speech API.

See the Pen Voice Calculator by Brian Holt (@btholt) on CodePen.
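
The browser side of a demo like that uses the Web Speech API, which Chrome exposes with a webkit prefix. Here is a rough sketch of how recognized speech could be handed to the handleUtterance() function from the previous snippet; this is illustrative wiring, not the actual CodePen code.

// Rough sketch; Chrome only, and it will prompt for microphone access.
const recognition = new webkitSpeechRecognition();
recognition.lang = 'en-US';

recognition.onresult = (event) => {
  // The first alternative of the first result is the recognized phrase
  const utterance = event.results[0][0].transcript;
  handleUtterance(utterance); // send the transcript to LUIS, as sketched above
};

recognition.start();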

Bot Design Is Still Hard

Having a smart bot is only half the battle. We still need to account for all of the actions that our system might expose, and that can lead to a lot of different logical paths, which makes for messy code.

Conversations also happen in stages, so the bot needs to be able to intelligently direct users down the right path without frustrating them, and it has to recover gracefully when something goes wrong, or when the conversation dies midstream and then starts again. That's a whole other article, and I've included some resources below to help.

When it comes to language understanding, the AI platforms are mature and ready to use today. While that won’t help you perfectly design your bot, it will be a key component to building a bot that people don’t hate.

Great UI Is Just Great UI

A final note: As we saw from the AT&T example, a truly smart interface combines great speech recognition, Natural Language Processing, different types of conversational UI (speech and text), and even a visual UI. In short, great UI is just that — great UI — and it is not a zero-sum game. Great UIs will leverage all of the technology available to provide the best possible user experience.

Special thanks to Mat Velloso for his input on this article.

Further Resources:

Smashing Editorial
(rb, ra, yk, il)


Visit site: The Rise Of Intelligent Conversational UI

Understanding Logical Properties And Values

In the past, CSS has tied itself to physical dimensions and directions, physically mapping the placement of elements to the left, right, top, and bottom. We float an element left or right; we use the positioning offset properties top, left, bottom, and right. We set margins, padding, and borders with properties such as margin-top and padding-left. These physical properties and values make sense if you are working in a horizontal, top-to-bottom, left-to-right writing mode and direction.

More – Understanding Logical Properties And Values