Building A PWA Using Angular 6

Ahmed Bouchefra



In this tutorial, we’ll be using the latest Angular 6 to build a PWA by implementing the core tenets that make a PWA. We’ll start by creating a front-end web application that consumes a JSON API. For this, we’ll be using the Angular HttpClient module to send HTTP requests to a static JSON API generated from the Simplified JavaScript Jargon GitHub repository. We’ll also use Material Design for building the UI via the Angular Material package.

Next, we’ll use the “Audits” panel (Lighthouse) from Chrome DevTools to analyze our web application against the core tenets of PWAs. Finally, we’ll explain and add the PWA features to our web application according to the “Progressive Web App” section in the Lighthouse report.

Before we start implementing our PWA, let’s first introduce PWAs and Lighthouse.

Recommended reading: Native And PWA: Choices, Not Challengers!

What’s A PWA?

A Progressive Web App or PWA is a web application that has a set of capabilities (similar to native apps) which provide an app-like experience to users. PWAs need to meet a set of essential requirements that we’ll see next. PWAs are similar to native apps but are deployed and accessible from web servers via URLs, so we don’t need to go through app stores.

A PWA needs to be:

  • Progressive
    Work for every user, regardless of browser choice, because they are built with progressive enhancement as a core tenet.
  • Responsive
    Fit any form factor, desktop, mobile, tablet, or whatever is next.
  • Connectivity independent
    Enhanced with service workers to work offline or on low-quality networks.
  • App-like
    Use the app-shell model to provide app-style navigation and interactions.
  • Fresh
    Always up-to-date thanks to the service worker update process.
  • Safe
    Served via HTTPS to prevent snooping and ensure content has not been tampered with.
  • Discoverable
    Are identifiable as “applications” thanks to W3C manifests and service worker registration scope allowing search engines to find them.
  • Re-engageable
    Make re-engagement easy through features like push notifications.
  • Installable
    Allow users to “keep” apps they find most useful on their home screen without the hassle of an app store.
  • Linkable
    Easily share via URL and not require complex installation.

Introducing Lighthouse

Lighthouse is an open-source auditing tool created by Google which can be used to audit websites and applications for accessibility, performance, SEO, best practices and PWA features.

You can access Lighthouse from the Audits tab in Chrome DevTools, as a Node.js module, or as a CLI tool. You point Lighthouse at a URL and run the audits, which produces a report containing the results, along with suggestions on how you can improve your web application.
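
If you prefer the command line, a typical run of the Lighthouse CLI looks like this (the --view flag opens the generated HTML report in your browser):

$ npm install -g lighthouse
$ lighthouse http://127.0.0.1:8080 --view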

Installing Angular CLI v6 And Generating A Project

In this section, we’ll install the latest version of Angular CLI then we’ll use it to create a new Angular 6 project.

Angular CLI requires Node.js 8.9 or higher, so first make sure you have the required version installed by running the following command:

$ node -v

[Image: Checking Node version]

In case you don’t have Node.js installed, you can simply head on to the official Node download page and grab the Node binaries for your system.

Now, you can go ahead and install the latest version of Angular CLI by running:

$ npm install -g @angular/cli 

Note: Depending on your npm configuration, you may need to add sudo to install packages globally.

You can generate your Angular 6 project by running the following command in your terminal:

$ ng new pwademo

This will create a project with a structure that looks like:


[Image: Angular project structure]

Most work that’s done will be inside the src/ folder that contains the source code of the application.

Creating The Angular Application

After generating a project, we’ll build a web application that consumes a JSON API and displays the items on the home page. We’ll use the HttpClient service for sending HTTP requests and Angular Material for building the UI.

Adding Angular Material

Thanks to Angular CLI v6 and the new ng add command, adding Angular Material to your project is only one command away. You just need to run the following command from your terminal:

$ cd pwademo
$ ng add @angular/material

[Image: Adding Angular Material]

You can see from the screenshot that the command installs the required package from npm and updates a bunch of files to set up Angular Material in your project, steps which previously needed to be done manually.

Setting Up HttpClient And Consuming The JSON API

Now, let’s set up the Angular project to use HttpClient for sending HTTP requests. First, you need to import HttpClientModule in the main application module in the src/app/app.module.ts file:

/*...*/
import { HttpClientModule } from '@angular/common/http';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    /*...*/
    HttpClientModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

That’s it. We can now inject and use HttpClient in any component or service that belongs to the main module.

For demo purposes, we’ll consume a statically generated JSON API from the Simplified JavaScript Jargon GitHub repository. If you are consuming any other resource, make sure you have CORS enabled so the browser doesn’t disallow reading the remote resource due to the Same Origin Policy.
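
For reference, enabling CORS boils down to the server sending the appropriate response header; for example, a server that allows requests from any origin responds with:

Access-Control-Allow-Origin: *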

Let’s create a service that interfaces with the API. Inside your project folder, run:

$ ng g service api

This will create a service called ApiService in the src/app/api.service.ts file.

Now open the src/app/api.service.ts file and update it to reflect the following changes:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

export interface Item {
  name: string;
  description: string;
  url: string;
  html: string;
  markdown: string;
}

@Injectable({
  providedIn: 'root'
})
export class ApiService {
  private dataURL: string = "https://www.techiediaries.com/api/data.json";
  constructor(private httpClient: HttpClient) { }
  fetch(): Observable<Item[]> {
    return <Observable<Item[]>>this.httpClient.get(this.dataURL);
  }
}

We first imported the HttpClient and Observable classes, injected HttpClient in the constructor as httpClient, and added a fetch() method which calls the get() method of HttpClient (to send an HTTP GET request to our JSON endpoint) and returns an Observable that we can subscribe to later.

We also declared an Item interface which represents a single item of the returned JSON data.

Next, import this service in the application component. Open the src/app/app.component.ts file and add:

import { Component, OnInit } from '@angular/core';
import { ApiService } from './api.service';
import { Item } from './api.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  title = 'pwademo';
  items: Array<Item>;
  constructor(private apiService: ApiService) { }

  ngOnInit() {
    this.fetchData();
  }

  fetchData() {
    this.apiService.fetch().subscribe((data: Array<Item>) => {
      console.log(data);
      this.items = data;
    }, (err) => {
      console.log(err);
    });
  }
}

We import the ApiService that we created before and inject it as apiService. We also import the Item interface, which represents a single item of our JSON data, and declare the items variable of type Array<Item>, which will hold the fetched items.

Next, we add a fetchData() method which calls our fetch() method that we defined in the ApiService which returns an Observable. We simply subscribe to this observable in order to send a GET request to our JSON endpoint and get the response data that we finally assign to the items array.

We call the fetchData() method in the ngOnInit() life-cycle event so it will be called once the AppComponent component is initialized.

Adding The Application UI

Our application UI will consist of a navigation bar and the skeleton of the page which will be created with Angular Material.

Before using an Angular Material component, you’ll need to import its module. Each Material component belongs to its own module.

Open the src/app/app.module.ts file and add the following imports:

/*...*/
import { MatToolbarModule } from '@angular/material/toolbar';
import { MatCardModule } from '@angular/material/card';
import { MatButtonModule } from '@angular/material/button';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    /*...*/
    MatToolbarModule,
    MatCardModule,
    MatButtonModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

We import modules for toolbar, card and button components and we add them to the imports array of the AppModule.

Next, open the src/app/app.component.html file, delete what’s in there and add:

<mat-toolbar color="primary">
  <mat-toolbar-row>
    <span>JS-jargon</span>
  </mat-toolbar-row>
</mat-toolbar>
<main>
  <mat-card *ngFor="let item of items">
    <mat-card-header>
      <mat-card-title>{{item.name}}</mat-card-title>
    </mat-card-header>
    <mat-card-content>
      {{item.description}}
    </mat-card-content>
    <mat-card-actions>
      <a mat-raised-button href="{{item.url}}" color="primary">More</a>
    </mat-card-actions>
  </mat-card>
</main>

We use Material components to create the UI: the <mat-toolbar> component creates a Material toolbar, the <mat-card> component creates a Material card, and so on.

We iterate over the items array which gets populated by the fetchData() method when the component is initialized, and display items as Material cards. Each card contains the name, description and a link for more information (The link is styled as a Material button using the mat-raised-button directive).

This is a screenshot of the application:


[Image: Demo application]

Building The Application For Production

Typically, when checking your application for PWA features you should first build it for production because most PWA features are not added in development. For example, you don’t want to have service workers and caching enabled in development since you will periodically need to update the files.

Let’s build the application for production using the following command:

$ ng build --prod

The production build will be available from the dist/pwademo folder. We can use a tool like http-server to serve it.

First, install http-server using the following command:

$ npm i -g http-server

You can then run it using the following command:

$ cd dist/pwademo
$ http-server -o

The -o option will automatically open the default browser in your system and navigate to the http://127.0.0.1:8080/ address where our web application is available.

Analyzing The Application Using Lighthouse

Let’s now analyze our application using Lighthouse. First, launch Chrome and visit our application address http://127.0.0.1:8080/.

Next, open the Developer Tools or press Ctrl + Shift + I and click on the Audits panel.


[Image: Perform an audit]

You should preferably set the Emulation to Mobile instead of Desktop to emulate a mobile environment. Next, click on the blue Perform an audit… button. A dialog will open in which you choose the types of audits to perform against your web application. Un-check all types except Progressive Web App and click the Run audit button.


[Image: Progressive Web App audits]

Wait for Lighthouse to generate the report. This is a screenshot of the result at this stage:


[Image: Initial PWA report]

Lighthouse performs a series of checks which validate the aspects of a Progressive Web App specified by the PWA Checklist.
We get an initial score of 36/100 because some audits already pass.

Our application has 7 failed audits mainly related to Service Workers, Progressive Enhancement, HTTPS and Web App Manifest which are the core aspects of a PWA.

Registering A Service Worker

The first two failed audits (“Does not register a service worker” and “Does not respond with a 200 when offline”) are related to Service Workers and caching. So what’s a service worker?

A service worker is a feature available in modern browsers which acts as a network proxy, letting your application intercept network requests to cache assets and data. It can be used to implement PWA features such as offline support and push notifications.

To pass these audits we simply need to register a service worker and use it to cache files locally. When offline, the SW should return the locally cached version of the file. We’ll see a bit later how to add that with one CLI command.
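
For context, outside of the Angular tooling we’ll use below, registering a service worker manually takes only a few lines of JavaScript; here is a minimal sketch (the /sw.js path is a placeholder for your worker script):

// Register the service worker once the page has loaded (placeholder /sw.js path).
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/sw.js')
      .then(reg => console.log('Service worker registered with scope:', reg.scope))
      .catch(err => console.error('Service worker registration failed:', err));
  });
}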

Recommended reading: Making A Service Worker: A Case Study

Progressive Enhancement

The third failed audit (“Does not provide fallback content when JavaScript is not available”) is related to Progressive Enhancement, an essential aspect of a PWA. It refers to the capability of PWAs to run on different browsers but provide advanced features when they’re available. One simple example of PE is the use of the <noscript> HTML tag, which informs users of the need to enable JavaScript to run the application in case it’s not enabled:

<noscript>
Please enable JavaScript to run this application.
</noscript>

HTTPS

The fourth failed audit (“Does not redirect HTTP traffic to HTTPS”) is related to HTTPS, which is also a core aspect of PWAs (service workers can only be served from secure origins, except for localhost). The “Uses HTTPS” audit itself is considered as passed by Lighthouse since we’re auditing localhost, but once you use an actual host you need an SSL certificate. You can get a free SSL certificate from different services such as Let’s Encrypt, Cloudflare, Firebase or Netlify.

The Web App Manifest

The three remaining failed audits (“User will not be prompted to Install the Web App”, “Is not configured for a custom Splash Screen” and “Address bar does not match brand colors”) are related to a missing Web App Manifest, a JSON file that provides the name, description, icons and other information required by a PWA. It lets users install the web app on the home screen just like native apps, without going through an app store.

You need to provide a web app manifest and reference it from the index.html file using a <link> tag with its rel attribute set to manifest. We’ll see next how we can do that automatically with one CLI command.

Implementing PWA Features

Angular CLI v6 allows you to quickly add PWA features to an existing Angular application. You can turn your application into a PWA by simply running the following command in your terminal from the root of the project:

$ ng add @angular/pwa

The command automatically adds PWA features to our Angular application, such as:

  • A manifest.json file,
  • Different sizes of icons in the src/assets/icons folder,
  • The ngsw-worker.js service worker.

Open the dist/ folder which contains the production build. You’ll find various files but let’s concentrate on the files related to PWA features that we mentioned above:

A manifest.json file was added with the following content:

{
    "name": "pwademo",
    "short_name": "pwademo",
    "theme_color": "#1976d2",
    "background_color": "#fafafa",
    "display": "standalone",
    "scope": "/",
    "start_url": "/",
    "icons": [
        {
            "src": "assets/icons/icon-72x72.png",
            "sizes": "72x72",
            "type": "image/png"
        },
        {
            "src": "assets/icons/icon-96x96.png",
            "sizes": "96x96",
            "type": "image/png"
        },
        {
            "src": "assets/icons/icon-128x128.png",
            "sizes": "128x128",
            "type": "image/png"
        },
        {
            "src": "assets/icons/icon-144x144.png",
            "sizes": "144x144",
            "type": "image/png"
        },
        {
            "src": "assets/icons/icon-152x152.png",
            "sizes": "152x152",
            "type": "image/png"
        },
        {
            "src": "assets/icons/icon-192x192.png",
            "sizes": "192x192",
            "type": "image/png"
        },
        {
            "src": "assets/icons/icon-384x384.png",
            "sizes": "384x384",
            "type": "image/png"
        },
        {
            "src": "assets/icons/icon-512x512.png",
            "sizes": "512x512",
            "type": "image/png"
        }
    ]
}

As you can see, the added manifest.json file has the information required by a PWA, such as the name, short_name, start_url and icons.


[Image: Angular project structure]

The manifest.json file links to icons of different sizes, which were also added automatically in the assets/icons folder. You will, of course, need to replace those icons with your own once you are ready to build the final version of your PWA.


[Image: Angular project structure]

In the index.html file, the manifest.json file is referenced using:

<link  rel="manifest"  href="manifest.json">

The ngsw-worker.js file, which contains the service worker, was also automatically added. The code to install this service worker is automatically inserted in the src/app/app.module.ts file:

...
import { ServiceWorkerModule } from '@angular/service-worker';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    ...
    ServiceWorkerModule.register('/ngsw-worker.js', { enabled: environment.production })
  ],

The @angular/service-worker package is installed by the ng add command and added as a dependency in pwademo/package.json:

"dependencies": 
...
"@angular/service-worker": "^6.1.0"

Service worker build support is also enabled in the CLI: a "serviceWorker": true configuration option is added to the angular.json file.
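
For reference, the relevant fragment of angular.json looks roughly like this (abridged; the exact nesting can vary slightly between CLI versions):

"architect": {
  "build": {
    "configurations": {
      "production": {
        "serviceWorker": true
      }
    }
  }
}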

In the index.html file, a meta tag for theme-color with a value of #1976d2 is added (it also corresponds to the theme_color value in the manifest.json file):

<meta  name="theme-color"  content="#1976d2">

The theme color tells the browser what color to tint UI elements such as the address bar.

Adding the theme color to both the index.html and manifest.json files fixes the Address Bar Matches Brand Colors audit.

The Service Worker Configuration File

Another file, src/ngsw-config.json, is added to the project, but it’s not a required file for PWAs. It’s a configuration file that allows you to specify which files and data URLs the Angular service worker should cache and how it should update the cached files and data. You can find all the details about this file in the official docs.
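
For illustration, the generated ngsw-config.json looks roughly like the following (abridged), prefetching the app shell and lazily caching other assets:

{
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "resources": {
        "files": ["/favicon.ico", "/index.html"],
        "versionedFiles": ["/*.bundle.css", "/*.bundle.js", "/*.chunk.js"]
      }
    },
    {
      "name": "assets",
      "installMode": "lazy",
      "updateMode": "prefetch",
      "resources": {
        "files": ["/assets/**"]
      }
    }
  ]
}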

Note: As of this writing, with the latest 6.1.3 version, the ng add @angular/pwa command fails with the error: Path “/ngsw-config.json” already exists, so for now the solution is to downgrade @angular/cli and @angular/pwa to version 6.0.8.

Simply run the following commands in your project:

$ npm i @angular/cli@6.0.8
$ npm i @angular/pwa@6.0.8
$ ng add @angular/pwa

Now let’s re-run the audits against our locally hosted PWA. This is the new PWA score:


[Image: PWA report]

The Angular CLI doesn’t automatically add the JavaScript fallback code we mentioned in the Progressive Enhancement section, so open the src/index.html file and add it:

<noscript>
Please enable JavaScript to run this application.
</noscript>

Next, rebuild your application and re-run the audits. This is the result now:


[Image: PWA report]

We have only one failed audit which is related to HTTPS redirect. We need to host the application and configure HTTP to HTTPS redirect.

Let’s now run the audits against a hosted and secured version of our PWA.


[Image: PWA final report]

We get a score of 100/100, which means we’ve successfully implemented all core tenets of PWAs.

You can get the final code of this demo PWA from this GitHub repository.

Conclusion

In this tutorial, we’ve built a simple Angular application and turned it into a PWA using Angular CLI. We used Google’s Lighthouse to audit our application for PWA features and explained various core tenets of PWAs: service workers for adding offline support and push notifications, the web manifest file for enabling add-to-home-screen and splash screen features, progressive enhancement, and HTTPS.

You may also need to manually check for other items highlighted (under the “Additional items to manually check” section) but not automatically checked by Lighthouse. These checks are required by the baseline PWA Checklist by Google. They do not affect the PWA score, but it’s important that you verify them manually. For example, you need to make sure your site works cross-browser and that each page has a URL, which is important for shareability on social media.

Since PWAs are also about other aspects such as better perceived performance and accessibility, you can also use Lighthouse for auditing your PWA (or any general website) for these aspects and improve it as needed.


How to Calculate Your Landing Page Conversion Rate (And Increase It)


A landing page represents an opportunity. Your prospect or lead will either take advantage of it or they won’t. The landing page conversion rate tells you how well you’re doing. Some websites have one or two landing pages, while others have dozens. It all depends on how many products or services you sell and what referral source sends you the most traffic. For instance, if you get tons of traffic from Facebook and Instagram Ads, you might have a landing page for each of those referral sources. You’ll want to optimize each page to reflect the visual aesthetic and copy…


Suffering From Analysis Paralysis? You Should See An Optimization Specialist


Have you ever faced down a giant table or spreadsheet of data and thought, “I have no idea what to do with this”? As marketers we’ve all probably had those deer-in-the-headlights moments once or twice, where we’ve floundered to figure out what the hell we’re looking at. Crazy Egg was built on the premise of simplicity and ease of use, for those that I fondly like to call “Google Analytics-averse” – but there’s always room for improvement when it comes to helping folks switch from analysis to action mode. Whether you’re a UX designer, small business owner, SEO expert or…


Preparing Your App For iOS 12 Notifications

Kaya Thomas



In 2016, Apple announced a new extension called the UNNotificationContentExtension that allows developers to better customize their push and local notifications. The extension is triggered when a user long-presses or 3D-touches a notification, whether it has just been delivered to the phone or is viewed from the lock/home screen. In the content extension, developers can use a view controller to structure the UI of their notification, but there was no user interaction enabled within the view controller — until now. With the release of iOS 12 and Xcode 10, the view controller in the content extension enables user interaction, which means notifications will become even more powerful and customizable.

At WWDC 2018, Apple also announced several changes to notification settings and how they appear on the home screen. In an effort to make users more aware of how they are using apps and to allow more control over app usage, there is a new notification setting called “Deliver Quietly.” Users can set your app to Deliver Quietly from the Notification Center, which means they will not receive banners or sound notifications from your app, but notifications will still appear in the Notification Center. Apple, using an in-house algorithm which presumably tracks how often you interact with notifications, will also ask users if they still want to receive notifications from particular apps and encourage them to turn on Deliver Quietly or turn notifications off completely.

Notifications are getting a big refresh in iOS 12, and I’ve only scratched the surface. In the rest of this article, we’ll go over the rest of the new notification features coming to iOS 12 and how you can implement them in your own app.

Recommended reading: WWDC 2018 Diary Of An iOS Developer

Remote vs Local Notifications

There are two ways to send push notifications to a device: remotely or locally. To send notifications remotely, you need a server that can send JSON payloads to Apple’s Push Notification service. Along with the payload, you also need to send the device token and any other authentication certificate or tokens that verify your server is allowed to send the push notification through Apple. In this article, we focus on local notifications, which do not need a separate server. Local notifications are requested and sent through the UNUserNotificationCenter. We’ll go over later how to make the request for a local notification.

In order to send a notification, you first need to get permission from the user on whether or not they want you to send them notifications. With the release of iOS 12, there are a lot of changes to notification settings and permissions so let’s break it down. To test out any of the code yourself, make sure you have the Xcode 10 beta installed.

Notification Settings And Permissions

Deliver Quietly

Deliver Quietly is Apple’s attempt to allow users more control over the noise they may receive from notifications. Instead of going into the Settings app and looking for the app whose notification settings you want to change, you can now change the setting directly from the notification. This means that a lot more users may turn off notifications for your app or just deliver them quietly, which means the app will get badged and notifications only show up in the Notification Center. If your app has its own custom notification settings, Apple allows you to link directly to that screen from the settings management view pictured below.


[Image: Deliver Quietly feature — selecting Manage from a notification brings up the Deliver Quietly and Turn Off options]

In order to link to your custom notification setting screen, you must set providesAppNotificationSettings as a UNAuthorizationOption when you are requesting notification permissions in the app delegate.

In didFinishLaunchingWithOptions, add the following code:

UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound, .providesAppNotificationSettings]) { ... }

When you do this, you’ll now see your custom notification settings in two places:

  • If the user selects Turn Off when they go to manage settings directly from the notification;
  • In the notification settings within the system’s Settings app.

[Image: Deep link to custom notification settings for NotificationTester from a notification in the Notification Center]


[Image: Deep link to custom notification settings for NotificationTester from the system’s Settings app]

You also have to make sure to handle the callback for when the user selects either way to get to your notification settings. Your app delegate, or an extension of your app delegate, has to conform to the UNUserNotificationCenterDelegate protocol so you can implement the following callback method:

func userNotificationCenter(_ center: UNUserNotificationCenter, openSettingsFor notification: UNNotification?) {
    let navController = self.window?.rootViewController as! UINavigationController
    let notificationSettingsVC = NotificationSettingsViewController()
    navController.pushViewController(notificationSettingsVC, animated: true)
}
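
For this callback to be invoked, the notification center’s delegate needs to be set early on; a minimal sketch, assuming your app delegate itself conforms to UNUserNotificationCenterDelegate:

// In didFinishLaunchingWithOptions, before requesting authorization:
UNUserNotificationCenter.current().delegate = self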

Another new UNAuthorizationOption is provisional authorization. If you don’t mind your notifications being delivered quietly, you can add .provisional to your authorization options as shown below. This means that you don’t have to prompt the user to allow notifications — the notifications will still show up in the Notification Center.

UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .provisional]) { ... }

So now that you’ve determined how to request permission from the user to deliver notifications and how to navigate users to your own customized settings view, let’s go into more detail about the actual notifications.

Sending Grouped Notifications

Before we get into the customization of the UI of a notification, let’s go over how to make the request for a local notification. First, you have to register any UNNotificationCategory objects, which act like templates for the notifications you want to send. Any notification set to a particular category will inherit any actions or options that were registered with that category. After you’ve requested permission to send notifications in didFinishLaunchingWithOptions, you can register your categories in the same method.

let hiddenPreviewsPlaceholder = "%u new podcast episodes available"
let summaryFormat = "%u more episodes of %@"
let podcastCategory = UNNotificationCategory(identifier: "podcast", actions: [], intentIdentifiers: [], hiddenPreviewsBodyPlaceholder: hiddenPreviewsPlaceholder, categorySummaryFormat: summaryFormat, options: [])
UNUserNotificationCenter.current().setNotificationCategories([podcastCategory])

In the above code, I start by initiating two variables:

  • hiddenPreviewsPlaceholder
    This placeholder is used in case the user has “Show Previews” off for your app; if we don’t have a placeholder there, your notification will show with only “Notification” as the text.
  • summaryFormat
    This string is new for iOS 12 and coincides with the new feature called “Group Notifications” that will help the Notification Center look a lot cleaner. All notifications will show up in stacks, each representing either all notifications from the app or specific groups that the developer has set for their app.

The code below shows how we associate a notification with a group.

@objc func sendPodcastNotification(for podcastName: String) {
    let content = UNMutableNotificationContent()
    content.body = "Introducing Season 7"
    content.title = "New episode of \(podcastName):"
    content.threadIdentifier = podcastName.lowercased()
    content.summaryArgument = podcastName
    content.categoryIdentifier = NotificationCategoryType.podcast.rawValue
    sendNotification(with: content)
}

For now, I’ve hardcoded the text of the notification just for the example. The threadIdentifier is what creates the groups shown as stacks in the Notification Center. In this example, I want the notifications grouped by podcast, so each notification you get is separated by the podcast it’s associated with. The summaryArgument matches back to the categorySummaryFormat we set in the app delegate: we want the %@ in the format "%u more episodes of %@" to be replaced by the podcast name. Lastly, we have to set the category identifier to ensure the notification uses the template we set in the app delegate.

func sendNotification(with content: UNNotificationContent) {
    let uuid = UUID().uuidString
    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
    let request = UNNotificationRequest(identifier: uuid, content: content, trigger: trigger)
    UNUserNotificationCenter.current().add(request, withCompletionHandler: nil)
}

The above method is how we request the notification to be sent to the device. The identifier for the request is just a random unique string; the content is passed in (we create it in our sendPodcastNotification method); and the trigger is when you want the notification to send. If you want the notification to send immediately, you can set that parameter to nil.


[Image: Grouped notifications for NotificationTester]


[Image: Notification grouped with previews turned off]

Using the methods we’ve described above, here’s the result on the simulator. I have a button that has the sendPodcastNotification method as a target. I tapped the button three times to have the notifications sent to the device. In the first photo, I have “Show Previews” set to “Always” so I see the podcast and the name of the new episodes along with the summary that shows I have two more new episodes to check out. When “Show Previews” is set to “Never,” the result is the second image above. The user won’t see which podcast it is to respect the “No Preview” setting, but they can still see that I have three new episodes to check out.

Notification Content Extension

Now that we understand how to set our notification categories and make the request for them to be sent, we can go over how to customize the look of the notification using the Notification Service and Notification Content extensions. The Notification Service extension allows you to edit the notification content and download any attachments in your notification like images, audio or video files. The Notification Content extension contains a view controller and storyboard that allows you to customize the look of your notification as well as handle any user interaction within the view controller or taps on notification actions.

To add these extensions to your app, go to File → New → Target.


[Image: Adding a new target to the app for the Notification Content Extension]

You can only add them one at a time, so name your extension and repeat the process to add the other. If a pop-up appears asking you to activate your new scheme, click the “Activate” button to set it up for debugging.

For the purpose of this tutorial, we will be focusing on the Notification Content Extension. For local notifications, we can include the attachments in the request, which we’ll go over later.

First, go to the Info.plist file in the Notification Content Extension target.


[Image: Info.plist for the Notification Content Extension]

The following attributes are required:

  • UNNotificationExtensionCategory
    A string value equal to the notification category which we created and set in the app delegate. This will let the content extension know which notification you want to have custom UI for.
  • UNNotificationExtensionInitialContentSizeRatio
    A number between 0 and 1 which determines the aspect ratio of your UI. The default value is 1 which will allow your interface to have its total height equal to its width.

I’ve also set UNNotificationExtensionDefaultContentHidden to “YES” so that the default notification does not show when the content extension is running.

You can use the storyboard to set up your view or create the UI programmatically in the view controller. For this example I’ve set up my storyboard with an image view which will show the podcast logo, two labels for the title and body of the notification content, and a “Like” button which will show a heart image.

Now, in order to get the image showing for the podcast logo and the button, we need to go back to our notification request:

guard let pathUrlForPodcastImg = Bundle.main.url(forResource: "startup", withExtension: "jpg") else { return }
let imgAttachment = try! UNNotificationAttachment(identifier: "image", url: pathUrlForPodcastImg, options: nil)

guard let pathUrlForButtonNormal = Bundle.main.url(forResource: "heart-outline", withExtension: "png") else { return }
let buttonNormalStateImgAtt = try! UNNotificationAttachment(identifier: "button-normal-image", url: pathUrlForButtonNormal, options: nil)

guard let pathUrlForButtonHighlighted = Bundle.main.url(forResource: "heart-filled", withExtension: "png") else { return }
let buttonHighlightStateImgAtt = try! UNNotificationAttachment(identifier: "button-highlight-image", url: pathUrlForButtonHighlighted, options: nil)

content.attachments = [imgAttachment, buttonNormalStateImgAtt, buttonHighlightStateImgAtt]

I added a folder in my project that contains all the images we need for the notification so we can access them through the main bundle.


[Image: Xcode project navigator]

For each image, we get the file path and use it to create a UNNotificationAttachment. Adding these to our notification content allows us to access the images in the Notification Content Extension in the didReceive method shown below.

func didReceive(_ notification: UNNotification) {
self.newEpisodeLabel.text = notification.request.content.title
self.episodeNameLabel.text = notification.request.content.body

let imgAttachment = notification.request.content.attachments[0]
let buttonNormalStateAtt = notification.request.content.attachments[1]
let buttonHighlightStateAtt = notification.request.content.attachments[2]

guard let imageData = NSData(contentsOf: imgAttachment.url), let buttonNormalStateImgData = NSData(contentsOf: buttonNormalStateAtt.url), let buttonHighlightStateImgData = NSData(contentsOf: buttonHighlightStateAtt.url) else { return }

let image = UIImage(data: imageData as Data)
let buttonNormalStateImg = UIImage(data: buttonNormalStateImgData as Data)?.withRenderingMode(.alwaysOriginal)
let buttonHighlightStateImg = UIImage(data: buttonHighlightStateImgData as Data)?.withRenderingMode(.alwaysOriginal)

imageView.image = image
likeButton.setImage(buttonNormalStateImg, for: .normal)
likeButton.setImage(buttonHighlightStateImg, for: .selected)
}

Now we can use the file path URLs we set in the request to grab the data for the URL and turn them into images. Notice that I have two different images for the different button states which will allow us to update the UI for user interaction. When I run the app and send the request, here’s what the notification looks like:


[Image: Content extension loaded for the NotificationTester app]

Everything I’ve mentioned so far in relation to the content extension isn’t new in iOS 12, so let’s dig into the two new features: User Interaction and Dynamic Actions. When the content extension was first added in iOS 10, there was no ability to capture user touch within a notification, but now we can register UIControl events and respond when the user interacts with a UI element.

For this example, we want to show the user that the “Like” button has been selected or unselected. We already set the images for the .normal and .selected states, so now we just need to add a target for the UIButton so we can update the selected state.

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any required interface initialization here.
    likeButton.addTarget(self, action: #selector(likeButtonTapped(sender:)), for: .touchUpInside)
}

@objc func likeButtonTapped(sender: UIButton) {
    likeButton.isSelected = !sender.isSelected
}

Now with the above code we get the following behavior:

[Image: Selecting the like button within the notification]

In the selector method likeButtonTapped, we could also add any logic for saving the liked state in User Defaults or the Keychain, so we have access to it in our main application.
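
As a minimal sketch of that idea (the App Group suite name below is hypothetical; sharing data with your main app requires an App Group you configure yourself):

@objc func likeButtonTapped(sender: UIButton) {
    likeButton.isSelected = !sender.isSelected
    // Persist the liked state in a shared container (hypothetical App Group ID).
    let defaults = UserDefaults(suiteName: "group.com.example.notificationtester")
    defaults?.set(likeButton.isSelected, forKey: "episode-liked")
}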

Notification actions have existed since iOS 10, but once the user taps one, they are usually rerouted to the main application or the content extension is dismissed. Now in iOS 12, we can update the list of notification actions shown in response to the action the user selects.

First, let’s go back to our app delegate where we create our notification categories so we can add some actions to our podcast category.

let playAction = UNNotificationAction(identifier: "play-action", title: "Play", options: [])
let queueAction = UNNotificationAction(identifier: "queue-action", title: "Queue Next", options: [])
let podcastCategory = UNNotificationCategory(identifier: "podcast", actions: [playAction, queueAction], intentIdentifiers: [], hiddenPreviewsBodyPlaceholder: hiddenPreviewsPlaceholder, categorySummaryFormat: summaryFormat, options: [])

Now when we run the app and send a notification, we see the following actions shown below:


[Image: Notification quick actions]

When the user selects “Play,” we want the action to be updated to “Pause.” If they select “Queue Next,” we want that action to be updated to “Remove from Queue.” We can do this in our didReceive method in the Notification Content Extension’s view controller.

func didReceive(_ response: UNNotificationResponse, completionHandler completion:
(UNNotificationContentExtensionResponseOption) -> Void) {
    guard let currentActions = extensionContext?.notificationActions else { return }

    if response.actionIdentifier == "play-action" {
        let pauseAction = UNNotificationAction(identifier: "pause-action", title: "Pause", options: [])
        let otherAction = currentActions[1]
        let newActions = [pauseAction, otherAction]
        extensionContext?.notificationActions = newActions
    } else if response.actionIdentifier == "queue-action" {
        let removeAction = UNNotificationAction(identifier: "remove-action", title: "Remove from Queue", options: [])
        let otherAction = currentActions[0]
        let newActions = [otherAction, removeAction]
        extensionContext?.notificationActions = newActions
    } else if response.actionIdentifier == "pause-action" {
        let playAction = UNNotificationAction(identifier: "play-action", title: "Play", options: [])
        let otherAction = currentActions[1]
        let newActions = [playAction, otherAction]
        extensionContext?.notificationActions = newActions
    } else if response.actionIdentifier == "remove-action" {
        let queueAction = UNNotificationAction(identifier: "queue-action", title: "Queue Next", options: [])
        let otherAction = currentActions[0]
        let newActions = [otherAction, queueAction]
        extensionContext?.notificationActions = newActions
    }
    completion(.doNotDismiss)
}

Resetting the extensionContext?.notificationActions list to contain the updated actions allows us to change the actions every time the user selects one. The behavior is shown in the gif below.

[Image: Dynamic notification quick actions — Play changes to Pause, and Queue Next changes to Remove from Queue]

Summary

There’s a lot to do before iOS 12 launches to make sure your notifications are ready. The steps vary in complexity and you don’t have to implement them all. Make sure to first download the Xcode 10 beta so you can try out the features we’ve gone over. If you want to play around with the demo app I’ve referenced throughout the article, check it out on GitHub.

For Your Notification Permissions Request And Settings, You’ll Need To:

  • Determine whether or not you want to enable provisional authorization and add it to your authorization options.
  • If you already have a customized notification settings view in your app, add providesAppNotificationSettings to your authorization options and implement the callback in your app delegate or whichever class conforms to UNUserNotificationCenterDelegate.

For Notification Grouping:

  • Add a thread identifier to your remote and local notifications so your notifications are correctly grouped in the Notification Center.
  • When registering your notification categories, add the category summary parameter if you want your grouped notification to be more descriptive than “more notifications.”
  • If you want to customize the summary text even more, then add a summary identifier to match whichever formatting you added for the category summary.

For Customized Rich Notifications:

  • Add the Notification Content extension target to your app to create rich notifications.
  • Design and implement the view controller to contain whichever elements you want in your notification.
  • Consider which interactive elements would be useful to you, i.e. buttons, table view, switches, etc.
  • Update the didReceive method in the view controller to respond to selected actions and update the list of actions if necessary.


How to Optimize Your Website for SEO and Conversions


Have you learned how to optimize your website for both SEO and conversions? If not, your website isn’t working as hard as it should. SEO and conversions might exist in separate parts of the marketing sector, but they’re inextricably linked. If you have good SEO, you can attract more traffic and get more opportunities to convert potential customers. A website optimized for conversions typically has better metrics, such as time on page and bounce rate, which means that Google might rank it higher. The following tips and strategies will teach you how to optimize your website for both SEO and…


Building A Central Logging Service In-House

Akhil Labudubariki



We all know how important debugging is for improving application performance and features. BrowserStack runs one million sessions a day on a highly distributed application stack! Each involves several moving parts, as a client’s single session can span multiple components across several geographic regions.

Without the right framework and tools, the debugging process can be a nightmare. In our case, we needed a way to collect events happening during different stages of each process in order to get an in-depth understanding of everything taking place during a session. With our infrastructure, solving this problem became complicated as each component might have multiple events from their lifecycle of processing a request.

That’s why we developed our own in-house Central Logging Service tool (CLS) to record all important events logged during a session. These events help our developers identify conditions where something goes wrong in a session and help keep track of certain key product metrics.

Debugging data ranges from simple things like API response latency to monitoring a user’s network health. In this article, we share our story of building our CLS tool, which reliably collects 70 GB of relevant chronological data per day from 100+ components, at scale and with two m3.large EC2 instances.

The Decision To Build In-House

First, let’s consider why we built our CLS tool in-house rather than using an existing solution. Each of our sessions sends 15 events on average, from multiple components to the service – translating into approximately 15 million total events per day.

Our service needed the ability to store all this data. We sought a complete solution to support event storing, sending and querying across events. As we considered third-party solutions such as Amplitude and Keen, our evaluation metrics included cost, performance in handling high parallel requests and ease of adoption. Unfortunately, we could not find a fit that met all our requirements within budget – although benefits would have included saving time and minimizing alerts. While it would take additional effort, we decided to develop an in-house solution ourselves.


[Image: One of the biggest issues with building in-house is the amount of resources needed to maintain it. (Image credit: Digiday)]

Technical Details

In terms of architecting for our component, we outlined the following basic requirements:

  • Client Performance
    Does not impact the performance of the client/component sending the events.
  • Scale
    Able to handle a high number of requests in parallel.
  • Service performance
    Quick to process all events being sent to it.
  • Insight into data
    Each event logged needs to have some meta information to be able to uniquely identify the component or user, account or message and give more information to help the developer debug faster.
  • Queryable interface
    Developers can query all events for a particular session, helping to debug a particular session, build component health reports, or generate meaningful performance statistics of our systems.
  • Faster and easier adoption
    Easy integration with an existing or new component without burdening teams and taking up their resources.
  • Low maintenance
    We are a small engineering team, so we sought a solution to minimize alerts!

Building Our CLS Solution

Decision 1: Choosing An Interface To Expose

In developing CLS, we obviously didn’t want to lose any of our data, but we didn’t want component performance to take a hit either. Not to mention the additional factor of preventing existing components from becoming more complicated, since it would delay overall adoption and release. In determining our interface, we considered the following choices:

  1. Storing events in local Redis in each component, as a background processor pushes it to CLS. However, this requires a change in all components, along with an introduction of Redis for components which didn’t already contain it.
  2. A Publisher – Subscriber model, where Redis is closer to the CLS. As everyone publishes events, again we have the factor of components running across the globe. During the time of high-traffic, this would delay components. Further, this write could intermittently jump up to five seconds (due to the internet alone).
  3. Sending events over UDP, which offers a lesser impact on application performance. In this case data would be sent and forgotten, however, the disadvantage here would be data loss.

Interestingly, our data loss over UDP was less than 0.1 percent, which was an acceptable amount for us to consider building such a service. We were able to convince all teams that this amount of loss was worth the performance, and went ahead to leverage a UDP interface that listened to all events being sent.

While one result was a smaller impact on an application’s performance, we did face an issue as UDP traffic was not allowed from all networks, mostly from our users’ – causing us in some cases to receive no data at all. As a workaround, we supported logging events using HTTP requests. All events coming from the user’s side would be sent via HTTP, whereas all events being recorded from our components would be via UDP.

Decision 2: Tech Stack (Language, Framework & Storage)

We are a Ruby shop. However, we were uncertain if Ruby would be a good choice for our particular problem. Our service would have to handle a lot of incoming requests as well as process a lot of writes. With the Global Interpreter Lock, achieving multithreading or concurrency would be difficult in Ruby (please don’t take offense – we love Ruby!). So we needed a solution that would help us achieve this kind of concurrency.

We were also keen to evaluate a new language in our tech stack, and this project seemed perfect for experimenting with new things. That’s when we decided to give Golang a shot, since it offers inbuilt support for concurrency via lightweight threads called goroutines. Each logged data point resembles a key-value pair where ‘key’ is the event and ‘value’ serves as its associated value.

But having a simple key and value is not enough to retrieve session-related data; there is more metadata to it. To address this, we decided any event needing to be logged would have a session ID along with its key and value. We also added extra fields like the timestamp, user ID and the component logging the data, so that it became easier to fetch and analyze data.
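
As a rough illustration (not our exact schema), the payload described above could be modeled in Go like this:

// Event models a single CLS entry: a key-value pair plus the metadata
// needed to tie it back to a session, user and component.
type Event struct {
    SessionID string `bson:"session_id"`
    Key       string `bson:"key"`
    Value     string `bson:"value"`
    UserID    string `bson:"user_id"`
    Component string `bson:"component"`
    Timestamp int64  `bson:"timestamp"`
}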

Now that we had decided on our payload structure, we had to choose our datastore. We considered Elasticsearch, but we also wanted to support update requests for keys, and updating a key would trigger the entire document to be re-indexed, which might affect the performance of our writes. MongoDB made more sense as a datastore since it would be easier to query all events based on any of the data fields that would be added. This was easy!

Decision 3: DB Size Is Huge And Query And Archiving Sucks!

In order to cut maintenance, our service would have to handle as many events as possible. Given the rate that BrowserStack releases features and products, we were certain the number of our events would increase at higher rates over time, meaning our service would have to continue to perform well. As space increases, reads and writes take more time – which could be a huge hit on the service’s performance.

The first solution we explored was moving logs from a certain period away from the database (in our case, we decided on 15 days). To do this, we created a different database for each day, allowing us to find logs older than a particular period without having to scan all written documents. Now we continually remove databases older than 15 days from Mongo, while of course keeping backups just in case.

The only leftover piece was a developer interface to query session-related data. Honestly, this was the easiest problem to solve. We provide an HTTP interface, where people can query for session related events in the corresponding database in the MongoDB, for any data having a particular session ID.

Architecture

Let’s talk about the internal components of the service, considering the following points:

  1. As previously discussed, we needed two interfaces – one listening over UDP and another listening over HTTP. So we built two servers, again one for each interface, to listen for events. As soon as an event arrives, we parse it to check whether it has the required fields – these are session ID, key, and value. If it does not, the data is dropped. Otherwise, the data is passed over a Go channel to another goroutine, whose sole responsibility is to write to the MongoDB.
  2. A possible concern here is writing to the MongoDB. If writes to the MongoDB are slower than the rate data is received, this creates a bottleneck. This, in turn, starves other incoming events and means dropped data. The server, therefore, should be fast in processing incoming logs and be ready to process ones upcoming. To address the issue, we split the server into two parts: the first receives all events and queues them up for the second, which processes and writes them into the MongoDB.
  3. For queuing we chose Redis. By dividing the entire component into these two pieces we reduced the server’s workload, giving it room to handle more logs.
  4. We wrote a small service using Sinatra server to handle all the work of querying MongoDB with given parameters. It returns an HTML/JSON response to developers when they need information on a particular session.

All these processes happily run on a single m3.large instance.
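
As a rough sketch of the receive/queue/write split described above (illustrative only; the real service queues through Redis between the two halves, and the port and payload format here are placeholders):

package main

import (
    "log"
    "net"
    "strings"
)

func main() {
    events := make(chan string, 10000) // in-memory queue standing in for Redis

    // Writer goroutine: drains the queue and persists events (MongoDB in the real system).
    go func() {
        for ev := range events {
            log.Println("would write to MongoDB:", ev)
        }
    }()

    conn, err := net.ListenPacket("udp", ":8125") // placeholder port
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    buf := make([]byte, 2048)
    for {
        n, _, err := conn.ReadFrom(buf)
        if err != nil {
            continue
        }
        msg := string(buf[:n])
        // Drop events missing the required fields (session ID, key, value).
        if len(strings.SplitN(msg, "|", 3)) < 3 {
            continue
        }
        select {
        case events <- msg: // queued for the writer
        default: // queue full: drop rather than block the listener
        }
    }
}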


[Image: CLS v1: A representation of the system’s first architecture. All the components run on one single machine.]

Feature Requests

As our CLS tool saw more use over time, it needed more features. Below, we discuss these and how they were added.

Missing Metadata

Gradually, as the number of components in BrowserStack increased, we demanded more from CLS. For example, we needed the ability to log events from components lacking a session ID; otherwise, obtaining one would burden our infrastructure, affecting application performance and incurring traffic on our main servers.

We addressed this by enabling event logging using other keys, such as terminal and user IDs. Now whenever a session is created or updated, CLS is informed with the session ID, as well as the respective user and terminal IDs. It stores a map that can be retrieved by the process of writing to MongoDB. Whenever an event that contains either the user or terminal ID is retrieved, the session ID is added.

Handle Spamming (Code Issues In Other Components)

CLS also faced the usual difficulties of handling spam events. We often saw deploys of components that generated a huge volume of requests to CLS. Other logs would suffer in the process: the server became too busy to process them, and important logs were dropped.

Most of the data was logged via HTTP requests. To control them, we enabled rate limiting in nginx (using the limit_req_zone directive), which blocks any IP that makes more than a certain number of requests in a short window. Of course, we also collect health reports on all blocked IPs and inform the responsible teams.
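
For reference, a rate-limiting setup of this kind looks roughly like the following nginx configuration. The zone name, rates, and upstream port are illustrative, not our production values:

# In the http block: track clients by IP, allowing 100 requests/second
# per IP with 10 MB of shared state (numbers are illustrative).
limit_req_zone $binary_remote_addr zone=cls:10m rate=100r/s;

server {
    listen 80;

    location / {
        # Allow short bursts; reject the excess (503 by default,
        # configurable via limit_req_status).
        limit_req zone=cls burst=200 nodelay;
        proxy_pass http://127.0.0.1:8080; # the CLS HTTP listener (assumed port)
    }
}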

Scale v2

As our sessions per day increased, so did the data being logged to CLS. This affected the queries our developers were running daily, and soon the bottleneck was the machine itself. Our setup consisted of two core machines running all of the above components, along with a bunch of scripts to query Mongo and track key metrics for each product. Over time, the data on the machine had grown heavily, and the scripts began to take a lot of CPU time. Even after trying to optimize the Mongo queries, we always came back to the same issues.

To solve this, we added another machine for running health report scripts and the interface to query these sessions. The process involved booting a new machine and setting up a slave of the Mongo running on the main machine. This has helped reduce the CPU spikes we saw every day caused by these scripts.


CLS v2: A representation of the current system’s architecture. Logs are written to the master machine and synced to the slave machine. Developers’ queries run on the slave machine.

Conclusion

Building a service for a task as simple as data logging can get complicated as the amount of data grows. This article discussed the solutions we explored and the challenges we faced while solving this problem. We experimented with Golang to see how well it would fit our ecosystem, and so far we have been satisfied. Our choice to build an internal service rather than pay for an external one has been wonderfully cost-efficient. We also didn’t have to scale our setup to another machine until much later, when the volume of our sessions increased. Of course, our choices in developing CLS were entirely based on our requirements and priorities.

Today CLS handles up to 15 million events every day, constituting up to 70 GB of data. This data is being used to help us solve any issues our customers face during any session. We also use this data for other purposes. Given the insights each session’s data provides on different products and internal components, we’ve begun leveraging this data to keep track of each product. This is achieved by extracting the key metrics for all the important components.

All in all, we’ve seen great success in building our own CLS tool. If it makes sense for you, I recommend you consider doing the same!



Jump to original:  

Building A Central Logging Service In-House


FAST UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process




FAST UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process

Zoe Dimov



Today, UX research has earned wide recognition as an essential part of product and service design. However, UX professionals still face two big problems when it comes to research: a lack of engagement from the team and stakeholders, and constant pressure to reduce the time spent on research.

In this article, I’ll take a closer look at each of these challenges and propose a new approach known as ‘FAST UX’ in order to solve them. This is a simple but powerful tool that you can use to speed up UX research and turn stakeholders into active champions of the process.

Contrary to what you might think, speeding up the research process (in both the short and long term) requires effective collaboration, rather than you going away and soldiering on by yourself.

The acronym FAST (Focus, Attend, Summarize, Translate) wraps up a number of techniques and ideas that make the UX process more transparent, fun, and collaborative. I also describe a 5-day project with a central UK government department that shows how the model can be put into practice.

The article is relevant for UX professionals and the people who work with them, including product owners, engineers, business analysts, scrum masters, marketing and sales professionals.

1. Lack Of Engagement Of The Team And Stakeholders

“Stakeholders have the capacity for being your worst nightmare and your best collaborator.”

UIE (2017)

As UX researchers, we need to ensure that “everyone in our team understands the end users with the same empathy, accuracy and depth as we do.” It has been shown that there is no better alternative to increasing empathy than involving stakeholders to actually experience the whole process themselves: from the design of the study (objectives, research questions), to recruitment, set up, fieldwork, analysis and the final presentation.

Anyone who has tried to do this knows that it can be extremely difficult to organize and get stakeholders to participate in research. There are two main reasons for this:

  1. Research is somebody else’s job.
    In my experience, UX professionals are often hired to “do the UX” for a company or organization. Even though the title of “Lead UX Researcher” sounds great and very important, it often leads to misconceptions during kick-off meetings: everyone automatically assumes that research is solely MY responsibility and nobody else’s job. It’s no wonder that stakeholders don’t want to get involved in the project.
  2. UX process frameworks are incomplete.
    The problem is that even when stakeholders want to engage and participate in UX, they still do not know *how* they should get involved and *what* they should do. We spend a lot of time selling a UX process and research frameworks that are useful but ultimately incomplete — they do not explain how non-researchers can get involved in the research process.

Fig. 1. Despite our enthusiasm as researchers, stakeholders often don’t understand how to get involved with the research process.

Further, a lot of stakeholders can find words such as ‘design,’ ‘analysis’ or ‘fieldwork’ intimidating or irrelevant to what they do. In fact, “UX is rife with jargon that can be off-putting to people from other fields.” In some situations, terms are familiar but mean something completely different, e.g., research in UX versus marketing research.

2. Pressure To Constantly Reduce The Time For Research

Another issue is that there is a constantly growing pressure to speed up the UX process and reduce the time spent on research. I cannot count the number of times when a project manager asked me to shorten a study even further by skipping the analysis stage or the kick-off sessions.

While previously you could spend weeks on research, a 5-day research cycle is increasingly becoming the norm. In fact, the book Sprint describes how research can dwindle to just a day (from an overall 5-day cycle).

Considering this, there is a LOT of pressure on UX researchers to deliver fast, without compromising the quality of the study. The difficulty increases when there are multiple stakeholders, each with their own opinions, demands, views, assumptions, and priorities.

The Fast UX Approach

Contrary to what you might think, reducing the time it takes to do UX research does not mean that you need to soldier on by yourself. I have done this, and it only works in the short term. It does not matter how amazing the findings are: there are not enough PowerPoint slides in the world to convince a team of the urgency to take action if they have not been on the research journey themselves.

In the long term, the more actively engaged your team and stakeholders are in the research, the more empowered they will feel and the more willing they will be to take action. Productive collaboration also means that you can move together at a quicker pace and speed up the whole research process.

The FAST UX Research framework (see Fig. 2 below) is a tool to truly engage team members and stakeholders in a way that turns them into active advocates and champions of the research process. It shows non-researchers when and how they should get involved in UX Research.


Fig. 2. The FAST User Experience Research framework.

In essence, stakeholders take ownership of the UX research process by carrying out four activities, each corresponding to one of the research stages.

Working together reduces the time it takes for UX Research. The true benefit of the approach, however, is that, in the long term, it takes less and less time for the business to take action based on research findings as people become true advocates of user-centricity and the research process.

This approach can be applied to any qualitative research method and with any team. For example, you can carry out FAST usability testing, FAST interviews, FAST ethnography, and so on. In order to be effective, you will need to explain this approach to your stakeholders from the start. Talk them through the framework, explaining each stage. Emphasize that this is what EVERYONE does, that it’s their work as much as the UX researcher’s job, and that it’s only successful if everyone is involved throughout the process.

Stage 1: Focus (Define A Common Goal)

There is broad consensus within UX that a research project should start by defining its purpose: why is this research being done, and how will the results be acted upon?


Fig. 3. Focus is about defining clear objectives and goals for the research; doing so is ultimately the shared responsibility of the team and all stakeholders.

Generally, this is expressed within the research goals, objectives, research questions and/or hypotheses. Most projects start with a kick-off meeting where those are either discussed (based on an available brief) or are defined during the meeting.

The most common problem with kick-off sessions like these is that stakeholders come up with too many things they want to learn from a study. The way to turn the situation around is to assign a specific task to your immediate team (the other UX professionals you work with) and stakeholders (key decision makers): they will help focus the study from the beginning.

The way they will do that is by working together through the following steps:

  1. Identify as a group the current challenges and problems.
    Ask someone to take notes on a shared document; alternatively, ask everyone to participate and write on sticky notes which are then displayed on a “project wall” for everyone to see.
  2. Identify the potential objectives and questions for a research study.
    Do this the same way you did the previous step. You don’t need to commit to anything yet.
  3. Prioritize.
    Ask the team to order the objectives and questions, starting with the most important ones.
  4. Reword and rephrase.
    Look at the top 3 questions and objectives. Are they too broad or too narrow? Could they be reworded to make the focus of the study clearer? Are they feasible? Do you need to split or merge objectives and questions?
  5. Commit to be flexible.
    Agree on the top 1-2 objectives and ensure that you have agreement from everyone that this is what you will focus on.

Here are some questions you can ask to help your stakeholders and team to get to the focus of the study faster:

  • Of the objectives we have identified, which is the most important?
  • What does success look like?
  • If we could learn only one thing, which would be the most important?

Your role during the process is to provide expertise and:

  • Determine whether the identified objectives and questions are feasible for a single study;
  • Help with the wording of objectives and questions;
  • Design the study (including selecting a methodology) once the focus has been identified.

At first sight, the Focus and Attend (next stages) activities might be familiar as you are already carrying out a kick-off meeting and inviting stakeholders to attend research sessions.

However, adopting a FAST approach means that your stakeholders have as much ownership as you do during the research process because work is shared and co-owned. Reiterate that the process is collaborative and at the end of the session, emphasize that agreeing on clear research objectives is not easy. Remind everybody that having a shared focus is already better than what many teams start with.

Finally, remind the team and your stakeholders what they need to do during the rest of the process.

Stage 2: Attend (Immerse The Team Deeply In The Research Process)

Seeing first hand the experience of someone using a product or service is so rich that there is no substitute for it. This is why getting stakeholders to observe user research is still considered one of the best and most powerful ways to engage the team.


Fig. 4. Attend in FAST UX Research is about encouraging the team and stakeholders not only to be present at all research sessions but also to engage actively with the research.

What often happens is that observers join in on the day of the research study and then they spend the time plastered to their laptops and mobile phones. What is worse, some stakeholders often talk to the note-taker and distract the rest of the design team who need to observe the sessions.

This is why it is just as important that you get the team to interact with the research. The following activities allow the team to immerse themselves in the research session. You can ask stakeholders to:

  • Ask questions during the session through a dedicated live chat (e.g. Slack, Google Hangouts, Skype);
  • Take notes on sticky notes;
  • Summarize observations for everyone (see next stage).

Assign one person per session for each of these activities. Have one “live chat manager,” one “note-taker,” and one “observer” who will sum up the session afterwards.

Rotate people for the next session.

Before the session, it’s useful to walk observers through the ‘ground rules’ very briefly. You can have a poster similar to the one GDS developed to help you do this and remind the team of their role during the study (see Fig. 5 below).


Fig. 5. A poster can be hung in the observation room to remind the team and stakeholders of their responsibilities and the ground rules during observation.

Farrell (2017) provides more detail on effective ways for stakeholders to take notes together. When you have multiple stakeholders and it’s not feasible for them to physically attend a field visit (e.g. on the street, in an office, at the home of the participant), you could stream the session to an observation room.

Stage 3: Summarize (Analysis For Non-Researchers)

I am a strong supporter of the idea that analysis starts the moment fieldwork begins. During the very first research session, you start looking for patterns and interpreting what your data means.


Fig. 6. Summarize in FAST UX Research is about asking the team and your stakeholders to tell you what they thought were the most interesting aspects of the user research.

Even after the first session (but typically towards the end of fieldwork), you can carry out collaborative analysis: a fun and productive way to ensure that everyone participates in one of the most important stages of research.

The collaborative analysis session is an activity where you provide an opportunity for everyone to be heard and create a shared understanding of the research.

Since you’re including other experts’ perspectives, you’re increasing the chances to identify more objective and relevant insights, and also for stakeholders to act upon the results of the study.

Even though ‘analysis’ is an essential part of any research project, a lot of stakeholders get scared by the word. The activity sounds very academic and complex. This is why at the end of each research session, research day, or the study as a whole, the role of your stakeholders and immediate team is to summarize their observations. Summarizing may sound superfluous but is an important part of the analysis stage; this is essentially what we do during “Downloading” sessions.

Listening to someone’s summary provides you with an opportunity to understand:

  • What they paid attention to;
  • What is important for them;
  • Their interpretation of the event.

Summary At The End Of Each Session

You do this by reminding everyone at the beginning of the session that at the end you will enter the room and ask them to summarize their observations and recommendations.

You then end the session by asking each stakeholder the following:

  • What were their key observations (see also Fig. 5)?
    • What happened during the session?
    • Were there any major difficulties for the participant?
    • What were the things that worked well?
  • Was there anything that surprised them?

This will make the team more attentive during the session, as they know that they will need to sum it up at the end. It will also help them to internalize the observations (and later, transition more easily to findings).

This is also the time to consistently share with your team what you think stands out from the study so far. Avoid the temptation to do a ‘big reveal’ at the end. It’s better if outcomes are told to stakeholders many times.

On multiple occasions, research has given me great outcomes, but instead of sharing them regularly, I kept them to myself until the final report. It didn’t work well. A big reveal at the end leads to bewildered stakeholders who often cannot jump from observations to insights as quickly. As a result, there is either stubborn pushback or indifferent shrugs.

Summary At The End Of The Day

A summary of the event or the day can then naturally transition into a collaborative analysis session. Your job is to moderate the session.

The job of your stakeholders is to summarize the events of the day and the final results. Ask a volunteer to talk the group through what happened during the day. Other stakeholders can then add to these observations.

Summary At The End Of The Study

After the analysis is done, ask one or two stakeholders to summarize the study. Make sure they cover why the research was done, what happened during the study, and what the primary findings are. They can also do this by walking through the project wall (if you have one).

It’s very difficult not to talk about your research and leave someone else to do it. But it’s worth it. No matter how much you’re itching to do this yourself — don’t! It’s a great opportunity for people to internalize research and become comfortable with the process. This is one of the key moments to turn stakeholders into active advocates of user research.

At the end of this stage, you should have 5-7 findings that capture the study.

Stage 4: Translate (Make Stakeholders Active Champions Of The Solution)

“Research doesn’t have a value unless it results in decisions and actions.”

—Lang and Howell (2017).

Even when you agree with the findings, stakeholders might still disagree about what the research means or lack commitment to take further action. This is why after summarizing, ask your stakeholders to work with you and identify the “Now what?” or what it all means for the organization, product, service, team and/or individually for each one of them.


Fig. 7. Translate in FAST UX Research is about asking the team or individual stakeholders to discuss each finding and articulate how it will impact the business, the service or product, and their own work.

Traditionally, it was the UX researcher’s job to write clear, precise, descriptive findings and actionable recommendations. However, if the team and stakeholders are not part of identifying actionable recommendations, they might be resistant to change in the future.

To prevent later pushback, ask stakeholders to identify the “Now what?” (also referred to as ‘actionable recommendations’). Together, you’ll be able to identify how the insights and findings will:

  • Affect the business and what needs to be done now;
  • Affect the product or service and what changes we need to make;
  • Affect people individually and the actions they need to take;
  • Lead to potential problems and challenges, and their solutions;
  • Help solve problems or identify potential solutions.

Stakeholders and the team can translate the findings at the end of a collaborative analysis session.

If you decide to separate the activities and conduct a meeting in which the only focus is on actionable recommendations, then consider the following format:

  1. Briefly talk through the 5-7 main findings from the study (as a refresher if this stage is done separately from the analysis session or with other stakeholders).
  2. Split the group into teams and ask them to work on one finding/problem at a time.
  3. Ask them to list as many ways as they can in which the finding affects them.
  4. Ask one person from each group to present the findings back to the team.
  5. Ask one or two final stakeholders to summarize the whole study, together with the methods, findings, and recommendations.

Later, you can have multiple similar workshops; this is how you get to engage different departments from the organization.

Fast UX In Practice

An excellent example of a FAST UX Research approach in practice is a project I was hired to carry out for a central UK government department. The ultimate goal of the project was to identify user requirements for a very complex internal system.

At first sight, this was a very challenging project because:

  • There was no time to get to know the department or the client.
    Usually, I would have at least a week or two to get to know the client, their needs, opinions, internal pressures, and challenges. For this project, I had to start work on a Monday with a team I had never met, in a building I had never worked in, in a domain I knew little about, and finish by Friday of the same week.
  • The system was very complex and required intense research.
    The internal system and the nature of work were very complex; this required gathering data with at least a few research methods (for triangulation).
  • This was the first time the team had worked with a UX Researcher.
    The stakeholders were primarily IT specialists. However, I was lucky that they were very keen and enthusiastic to be involved in the project and get their hands dirty.
  • Stakeholder availability.
    As is the case on many other projects, all stakeholders were extremely busy as they had their own work on top of the project. Nonetheless, we made it work, even if it meant meeting over lunch, or for a 15-minute wrap up before we went home.
  • There were internal pressures and challenges.
    As with any department and huge organization, there were a number of internal pressures and challenges. Some of them I expected (e.g. legacy systems, slow pace of change) but some I had no clue about when I started.
  • We had to coordinate work with external teams.
    An additional challenge was the need to work with and coordinate efforts with external teams at another UK department.

Despite all of these challenges, this was one of the most enjoyable projects I have worked on because of the tight collaboration initiated by the FAST approach.

The project consisted of:

  • 1 day of kick-off sessions and getting to know the team,
  • 2.5 days of contextual inquiries and shadowing of internal team members,
  • Half a day for a co-creation workshop, and
  • 1 day for analysis and results reporting.

In the process, I gathered data from 20+ employees and collected 16+ hours of observations, 300+ photos, and about 100 pages of notes. It’s a great example of cramming 3 weeks’ worth of work into a mere 5-day research cycle. More importantly, people in the department were really excited about the process.

Here is how we did it using a FAST UX Research approach:

  • Focus
    At the beginning of the project, the two key stakeholders identified what the focus of the research would be, while my role was mainly to help prioritize the objectives, tweak the research questions, and check for feasibility. In this sense, I listened and mainly asked questions, interjecting occasionally with examples from previous projects or options that helped adjust our approach.

    While I wrote the main discussion guide for the contextual inquiries and shadowing sessions, we sat together with the primary team to discuss and design the co-creation workshop with internal users of the system.

  • Attend
    During the workshop, one of the stakeholders moderated half of the session while the other took notes and closely observed the participants. It was a huge success internally: stakeholders felt their efforts to modernize the department gained visibility, while employees felt listened to and involved in the research.
  • Summarize
    Immediately after the workshop, we sat together with the stakeholders for a 30-minute meeting where I had them summarize their observations.

    As a result of the shadowing, contextual inquiries, and co-creation workshop, we were able to identify 60+ issues and problems with the internal system (with regard to integration, functionality, and usability), all captured in six high-level findings.

  • Translate
    Later, we discussed with the team how each of the six major findings translated to a change or implication for the department, the internal system, as well as collaboration with other departments.

We were so perfectly aligned with the team that when we had to talk about our work in front of another UK government department, I could let the stakeholders talk about the process and our progress.

My final task (over two additional days) was to document all of the findings in a research report. This was necessary as a knowledge repository because I had to move onto other projects.

With a more traditional approach, the project could easily have spanned 3 weeks. More importantly, quickly understanding individual and team pressures and challenges was key to the success of the new system; this could not have happened within the allocated time without a collaborative approach.

A FAST UX approach resulted in tight collaboration, strong co-ownership, and a shared sense of progress, all of which helped shorten the project and instilled a feeling of excitement about the UX research process.

Have You Tried It Out Already?

As UX research becomes ever more popular, gone are the days when we could soldier on by ourselves and consult stakeholders only at the end.

Mastering our craft as UX researchers means engaging others within the process and being articulate, clear, and transparent about our work. The FAST approach is a simple model that shows how to engage non-researchers with the research process. Reducing the time it takes to do research, both in the short (i.e. the study itself) and long term (i.e. using the research results), is a strategic advantage for the researcher, team, and the business as a whole.

Would you like to improve your efficiency and turn stakeholders into user research advocates? Go and try it out. You can then share your stories and advice here.

I would love to hear your comments, suggestions, and any feedback you care to share! If you have tried it out already, do you have success stories to share? Be as open as you can: what worked well, and what didn’t? As with all other things UX, it’s most fun if we learn together as a team.



Visit source: 

FAST UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process


How To Improve Your Design Process With Data-Based Personas




How To Improve Your Design Process With Data-Based Personas

Tim Noetzel



Most design and product teams have some type of persona document. Theoretically, personas help us better understand our users and meet their needs. The idea is that codifying what we’ve learned about distinct groups of users helps us make better design decisions. Referring to these documents ourselves and sharing them with non-design team members and external stakeholders should ultimately lead to a user experience more closely aligned with what real users actually need.

In reality, personas rarely prove equal to these expectations. On many teams, persona documents sit abandoned on hard drives, collecting digital dust while designers continue to create products based primarily on whim and intuition.

In contrast, well-researched personas serve as a proxy for the user. They help us check our work and ensure that we’re building things users really need.

In fact, the best personas don’t just describe users; they actually help designers predict their behavior. In her article on persona creation, Laura Klein describes it perfectly:

“If you can create a predictive persona, it means you truly know not just what your users are like, but the exact factors that make it likely that a person will become and remain a happy customer.”

In other words, useful personas actually help design teams make better decisions because they can predict with some accuracy how users will respond to potential product changes.

Obviously, for personas to facilitate these types of predictions, they need to be based on more than intuition and anecdotes. They need to be data-driven.

So, what do data-driven personas look like, and how do you make one?

Start With What You Think You Know

The first step in creating data-driven personas is similar to the typical persona creation process. Write down your team’s hypotheses about what the key user groups are and what’s important to each group.

If your team is like most, some members will disagree with others about which groups are important, the particular makeup and qualities of each persona, and so on. This type of disagreement is healthy, but unlike the usual persona creation process you may be used to, you’re not going to get bogged down here.

Instead of debating the merits of each persona (and the various facets and permutations thereof), the important thing is to be specific about the different hypotheses you and your team have and write them down. You’re going to validate these hypotheses later, so it’s okay if your team disagrees at this stage. You may choose to focus on a few particular personas, but make sure to keep a backlog of other ideas as well.

First, start by recording all the hypotheses you have about key personas. You’ll refine these through user research in the next step. (Large preview)

I recommend aiming for a short, 1–2 sentence description of each hypothetical persona that details who they are, what root problem they hope to solve by using your product, and any other pertinent details.

You can use the traditional user stories framework for this. If you were creating hypothetical personas for Craigslist, one of these statements might read:

“As a recent college grad, I want to find cheap furniture so I can furnish my new apartment.”

Another might say:

“As a homeowner with an extra bedroom, I want to find a responsible tenant to rent this space to so I can earn some extra income.”

If you have existing data — things like user feedback emails, NPS scores, user interview notes, or analytics data — be sure to go over them and include relevant data points in your notes along with your user stories.

Validate And Refine

The next step is to validate and refine these hypotheses with user interviews. For each of your hypothetical personas, you’ll want to start by interviewing 5 to 10 people who fit that group.

You have three key goals for these interviews. For each group, you need to:

  1. Understand the context in which they need to solve the problem.
  2. Confirm that members of the persona group agree that the problem you recorded is an urgent and painful one that they struggle to solve now.
  3. Identify key predictors of whether a member of this persona is likely to become and remain an active user.

The approach you take to these interviews may vary, but I recommend a hybrid approach between a traditional user interview, which is very non-leading, and a Lean Problem interview, which is deliberately leading.

Start with the traditional user interview approach and ask behavior-based, non-leading questions. In the Craigslist example, we might ask the recent college grad something like:

“Tell me about the last time you purchased furniture. What did you buy? Where did you buy it?”

These types of questions are great for establishing whether the interviewee recently experienced the problem in question, how they solved it, and whether they’re dissatisfied with their current solution.

Once you’ve finished asking these types of questions, move on to the Lean Problem portion of the interview. In this section, you want to tell a story about a time when you experienced the problem — establishing the various issues you struggled with and why it was frustrating — and see how they respond.

You might say something like this:

“When I graduated college, I had to get new furniture because I wasn’t living in the dorm anymore. I spent forever looking at furniture stores, but they were all either ridiculously expensive or big-box stores with super-cheap furniture I knew would break in a few weeks. I really wanted to find good furniture at a reasonable price, but I couldn’t find anything and I eventually just bought the cheap stuff. It inevitably broke, and I had to spend even more money, which I couldn’t really afford. Does any of that resonate with you?”

What you’re looking for here is emphatic agreement. If your interviewee says “yes, that resonates” but doesn’t get much more excited than they were during the rest of the interview, the problem probably wasn’t that painful for them.

You can validate or invalidate your persona hypotheses with a series of quick, 30-minute interviews. (Large preview)

On the other hand, if they get excited, empathize with your story, or give their own anecdote about the problem, you know you’ve found a problem they really care about and need to be solved.

Finally, make sure to ask any demographic questions you didn’t cover earlier, especially those around key attributes you think might be significant predictors of whether somebody will become and remain a user. For example, you might think that recent college grads who get high-paying jobs aren’t likely to become users because they can afford to buy furniture at retail; if so, be sure to ask about income.

You’re looking for predictable patterns. If you bring in 5 members of your persona and 4 of them have the problem you’re trying to solve and desperately want a solution, you’ve probably identified a key persona.

On the other hand, if you’re getting inconsistent results, you likely need to refine your hypothetical persona and repeat this process, using what you learn in your interviews to form new hypotheses to test. If you can’t consistently find users who have the problem you want to solve, it’s going to be nearly impossible to get millions of them to use your product. So don’t skimp on this step.

Create Your Personas

The penultimate step in this process is creating the actual personas themselves. This is where things get interesting. Unlike traditional personas, which are typically static, your data-driven personas will be living, breathing documents.

The goal here is to combine the lessons you learned in the previous step — about who the user is and what they need — with data that shows how well the latest iteration of your product is serving their needs.

At my company Swish, each one of our personas includes two sections with the following data:

Predictive User Data:

  • A description of the user, including predictive demographics;
  • Quotes from at least 3 actual users that describe the jobs-to-be-done;
  • The percentage of the potential user base the persona represents.

Product Performance Data:

  • The percentage of our current user base the persona represents;
  • The latest activation, retention, and referral rates for the persona;
  • The current NPS score for the persona.

If you’re looking for more ideas for data to include, check out Coryndon Luxmoore’s presentation on how his team created data-driven personas at Buildium.

It may take some time for your team to produce all this information, but it’s okay to start with what you have and improve the personas over time. Your personas won’t be sitting on a shelf. Every time you release a new feature or change an existing one, you should measure the results and update your personas accordingly.

Integrate Your Personas Into Your Workflow

Now that you’ve created your personas, it’s time to actually use them in your day-to-day design process. Here are 4 opportunities to use your new data-driven personas:

  1. At Standups
    At Swish, our standups are a bit different. We start these meetings by reviewing the activation, retention, and referral metrics for each persona. This ensures that — as we discuss yesterday’s progress and today’s obstacles — we remain focused on what really matters: how well we’re serving our users.
  2. During Prioritization
    Your data-driven personas are a great way to keep team members honest as you discuss new features and changes. When you know how much of your user base the persona represents and how well you’re serving them, it quickly becomes obvious whether a potential feature could actually make a difference. Suddenly deciding what to work on won’t require hours of debate or horse-trading.
  3. At Design Reviews
    Your data-driven personas are a great way to keep team members honest as you discuss new designs. When team members can creditably represent users with actual quotes from user interviews, their feedback quickly becomes less subjective and more useful.
  4. When Onboarding New Team Members
    New hires inevitably bring a host of implicit biases and assumptions about the user with them when they start. By including your data-driven personas in their onboarding documents, you can get new team members up to speed much more quickly and ensure they understand the hard-earned lessons your team learned along the way.

Keeping Your Personas Up To Date

It’s vitally important to keep your personas up-to-date so your team members can continue to rely on them to guide their design thinking.

As your product improves, it’s simple to update NPS scores and performance data. I recommend doing this monthly at a minimum; if you’re working on an early-stage, rapidly-changing product, you’ll get better mileage by updating these stats weekly instead.

It’s also important to check in with members of your personas periodically to make sure your predictive data stays relevant. As your product evolves and the competitive landscape changes, your users’ views about their problems will change as well. If your growth starts to plateau, another round of interviews may help to unlock insights you didn’t find the first time. Even if everything is going well, try to check in with members of your personas — both current users of your product and some non-users — every 6 to 12 months.

Wrapping Up

Building data-driven personas is a challenging project that takes time and dedication. You won’t uncover the insights you need or build the conviction necessary to unify your team with a week-long throwaway project.

But if you put in the time and effort necessary, the results will speak for themselves. Having the type of clarity that data-driven personas provide makes it far easier to iterate quickly, improve your user experience, and build a product your users love.

Further Reading

If you’re interested in learning more, I highly recommend checking out the articles linked above.



Original source: 

How To Improve Your Design Process With Data-Based Personas


Monthly Web Development Update 3/2018: Service Workers, Building A CDN, And Cheating At Design

Service Worker is probably one of the most misrepresented technologies we currently have. When I hear people talking about it, the topic almost always revolves around serving an app when a user is offline. However, Service Worker can do so much more than that, and every week I come across new articles that show how powerful the technology really is.

This month, for example, we can learn how to use Service Worker for cross-tab messaging and to offload requests into the background with the Background Sync API. I think the toolset we now have in our browsers already allows us to build great experiences regardless of the network state. Now it’s up to us to make those experiences so great that users truly love them. And that’s probably the hardest part.

News

Sketch 49
Sketch 49 has arrived, and with it comes the new Prototyping in Sketch feature which lets you see the entire flow in action. (Image credit)

General

  • Ed Ellson examined Chrome’s Background Sync API and the retry strategy it uses to perform a request. By allowing synchronization in the background after a first attempt has failed, the API helps us improve the browsing experience for users who go offline or are on unstable connections.

UI/UX

Cheating At Design
Using color and weight instead of size to create visual hierarchy is just one of the seven practical tips for cheating at design that Adam Wathan and Steve Schoger share. (Image credit)

Security

  • With GraphQL you can query exactly what you want whenever you want. This is amazing for working with an API but also has complex security implications. Instead of asking for legitimate, useful data, a malicious actor could submit an expensive, nested query to overload your server, database, network, or all of these. To prevent this from happening, Max Stoiber shows us how we can secure the GraphQL API in our projects.

Privacy

  • WebKit is introducing the Storage Access API. The new API targets one of the major issues with Safari’s Intelligent Tracking Protection (ITP): identifying users who are logged in to a first-party service but view its content embedded on a third-party site (YouTube videos on a blog, for example). The Storage Access API allows third-party embeds to request access to their first-party cookies when the user interacts with them: a good solution that protects user privacy by default and allows exceptions on request.

Web Performance

  • Janos Pasztor built his own Content Delivery Network because he thinks it can be a better solution than existing third-party services. The code for his personal website’s CDN is now available on GitHub. A nice web performance article that looks at common solutions from a different angle.
  • A year after Facebook announced it would broadly use Cache-Control: Immutable, Paul Calvano examined how widespread its usage is on the web, apart from the few big players. Interesting research, and it’s still sad to see this useful performance tool used so little. At Colloq, we use it quite a lot, which saves us a lot of traffic and server load and enables us to serve many pages nearly instantly to returning users.
Global stats of a self-built CDN
Global stats for the custom CDN that Janos Pasztor built. (Image credit)

Accessibility

Accessibility Checklist
The Accessibility Checklist helps build accessibility into your process no matter your role or stage in a project. (Image credit)

Work & Life

  • This week I read an article by Alex Duloz, and his words still stick with me: “When we develop a new application, when we post content on the Internet, whatever we do that people will have access to, we should consider just for a minute if our contribution adds up to the level of dumbness kids/teenagers are exposed to. If it does, we should refrain from going live.” The truth is, most of us, including me, don’t consider this before posting on the Internet. We create funny things, share funny pictures and try to get fame with silly posts. But in reality, we shape society with this. Let’s try to provide more useful resources and make the consumption of this more enjoyable so young people can profit from our knowledge and not only view things we think are funny. “We should always consider how teenagers will use what we release.”
  • MIT OpenCourseWare released a lot of free audio and video lectures. This is amazing news and makes great content available to a broader audience.
  • Jake Knapp says great work requires idealism and cynicism and has strong arguments to back up this theory. An article worth reading.
  • There’s an important article on how unhappiness has grown in America’s population since around the year 2000. It reveals that while income inequality might play a role, the more important finding is that young people who use a lot of digital media are unhappier than those who use it for up to an hour a day. Interestingly, people who don’t use digital media at all are unhappy, too, so the takeaway could be to use digital media only in moderation, at least in our private lives. I bet it’ll make a big difference.
  • According to Michael Bradley, projects don’t necessarily need a roadmap to succeed. Instead, he suggests creating a moral compass that points out why the project exists and what its purpose is.

We hope you enjoyed this Web Development Update. The next one is scheduled for April 13th. Stay tuned.

Excerpt from – 

Monthly Web Development Update 3/2018: Service Workers, Building A CDN, And Cheating At Design

How To Make A WordPress Plugin Extensible

Have you ever used a plugin and wished it did something a bit differently? Perhaps you needed something unique that was beyond the scope of the settings page of the plugin.

I have personally encountered this, and I’m betting you have, too. If you’re a WordPress plugin developer, most likely some of your users have also encountered this while using your plugin.

Here’s a typical scenario: You’ve finally found that plugin that does everything you need — except for one tiny important thing. There is no setting or option to enable that tiny thing, so you browse the documentation and find that you can’t do anything about it. You request the feature in the WordPress plugin’s support forum — but no dice. In the end, you uninstall it and continue your search.

Imagine if you were the developer of this plugin. What would you do if a user asked for some particular functionality?

The ideal thing would be to implement it. But if the feature was for a very special use case, then adding it would be impractical. It wouldn’t be good to have a plugin setting that only 0.1% of your users would have a use for.

You’d only want to implement features that affect the majority of your users. In reality, 80% of users use 20% of the features (the 80/20 rule). So, make sure that any new feature is highly requested, and that 80% of your users would benefit from it, before implementing it. If you created a setting for every feature that is requested, then your plugin would become complicated and bloated — and nobody wants that.

Your best bet is to make the plugin extensible, code-wise, so that other people can enhance or modify it for their own needs.

In this article, you’ll learn why making your plugin extensible is a good idea. I’ll also share a few tips I’ve learned on how to do this.

What Makes A Plugin Extensible?

In a nutshell, an extensible plugin means that it adheres to the “O” part of the SOLID principles of object-oriented programming — namely, the open/closed principle.

If you’re unfamiliar with the open/closed principle, it basically means that other people shouldn’t have to edit your code in order to modify something.

Applying this principle to a WordPress plugin, it would mean that a plugin is extensible if it has provisions in it that enable other people to modify its behavior. It’s just like how WordPress allows people to “hook” into different areas of WordPress, but at the level of the plugin.

A Typical Example Of A Plugin

Let’s see how we can create an extensible plugin, starting with a sample plugin that isn’t.

Suppose we have a plugin that generates a sidebar widget that displays the titles of the three latest posts. At the heart of the plugin is a function that simply wraps the titles of those three posts in list tags:


function get_some_post_titles() {
    $args = array(
        'posts_per_page' => 3,
    );

    $posts = get_posts( $args );

    $output = '<ul>';

    foreach ( $posts as $post ) {
        $output .= '<li>' . $post->post_title . '</li>';
    }

    $output .= '</ul>';

    return $output;
}

While this code works and gets the job done, it isn’t quite extensible.

Why? Because the function is set in its own ways, there’s no way to change its behavior without modifying the code directly.

What if a user wanted to display more than three posts, or perhaps include links with the posts’ titles? There’s no way to do that with the code above. The user is stuck with how the plugin works and can do nothing to change it.

Including A Hundred Settings Isn’t The Answer

There are a number of ways to enhance the plugin above to allow users to customize it.

One such way would be to add a lot of options in the settings, but even that might not satisfy all of the possibilities users would want from the plugin.

What if the user wanted to do any of the following (scenarios we’ll revisit later):

  • display WooCommerce products or posts from a particular category;
  • display the items in a carousel provided by another plugin, instead of as a simple list;
  • perform a custom database query, and then use that query’s posts in the list.

If we added a hundred settings to our widget, then we would be able to cover the use cases above. But what if one of these scenarios changes, and now the user wants to display only WooCommerce products that are currently in stock? The widget would need even more settings to accommodate this. Pretty soon, we’d have a gazillion settings.

Also, a plugin with a huge list of settings isn’t exactly user-friendly. Steer away from this route if possible.

So, how would we go about solving this problem? We’d make the plugin extensible.

Adding Our Own Hooks To Make It Extensible

By studying the plugin’s code above, we see a few operations that the main function performs:

  • It gets posts using get_posts.
  • It generates a list of post titles.
  • It returns the generated list.

If other people were to modify this plugin’s behavior, their work would most likely involve these three operations. To make our plugin extensible, we would have to add hooks around these operations to open them up for other developers.

In general, these are good areas to add hooks to a plugin:

  • around and within the major processes,
  • when building output HTML,
  • for altering post or database queries,
  • before returning values from a function.

A Typical Example Of An Extensible Plugin

Taking these rules of thumb, we can add the following filters to make our plugin extensible:

  • add myplugin_get_posts_args for modifying the arguments of get_posts,
  • add myplugin_get_posts for overriding the results of get_posts,
  • add myplugin_list_item for customizing the generation of a list entry,
  • add myplugin_get_some_post_titles for overriding the returned generated list.

Here’s the code again with all of the hooks added in:


function get_some_post_titles() {
    $args = array(
        'posts_per_page' => 3,
    );

    // Let other people modify the arguments.
    $posts = get_posts( apply_filters( 'myplugin_get_posts_args', $args ) );

    // Let other people modify the post array, which will be used for display.
    $posts = apply_filters( 'myplugin_get_posts', $posts, $args );

    $output = '<ul>';

    foreach ( $posts as $post ) {
        // Let other people modify the list entry.
        $output .= '<li>' . apply_filters( 'myplugin_list_item', $post->post_title, $post ) . '</li>';
    }

    $output .= '</ul>';

    // Let other people modify our output list.
    return apply_filters( 'myplugin_get_some_post_titles', $output, $args );
}

You can also get the code above in the GitHub archive.

I’m adding a lot of hooks here, which might seem impractical because the sample code is quite simple and small, but it illustrates my point: By adding just four hooks, other developers can now customize the plugin’s behavior in all sorts of ways.

Namespacing And Context For Hooks

Before proceeding, note two important things about the hooks we’ve implemented:

  • We’re namespacing the hooks with myplugin_.
    This ensures that the hook’s name doesn’t conflict with some other plugin’s hook. This is just good practice, because if another hook with the same name is called, it could lead to unwanted effects.
  • We’re also passing a reference to $args in all of the hooks for context.
    I do this so that if others use this filter to change something in the flow of the code, they can use that $args parameter as a reference to get an idea of why the hook was called, so that they can perform their adjustments accordingly.

The Effects Of Our Hooks

Remember the unique scenarios I talked about earlier? Let’s revisit those and see how our hooks have made them possible:

  • If the user wants to display WooCommerce products or posts from a particular category, then either they can use the filter myplugin_get_posts_args to add their own arguments for when the plugin queries posts, or they can use myplugin_get_posts to completely override the posts with their own list.
  • If the user wants to display the items in a carousel provided by another plugin, instead of as a simple list, then they can override the entire output of the function with myplugin_get_some_post_titles, and instead output a carousel from there.
  • If the user wants to perform a custom database query and then use that query’s posts in the list, then, similar to the first scenario, they can use myplugin_get_posts to use their own database query and change the post array.

Much better!

A Quick Example Of How To Use Our Filters

Developers can use add_filter to hook into our filters above (or use add_action for actions).

Taking our first scenario above, a developer can just do the following to display WooCommerce products using the myplugin_get_posts_args filter that we created:

add_filter( 'myplugin_get_posts_args', 'show_only_woocommerce_products' );

function show_only_woocommerce_products( $args ) {
    $args['post_type'] = 'product';
    return $args;
}
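
And remember the user who wanted links with the posts’ titles? Because myplugin_list_item passes the post object as context, a small sketch like the following (the callback name is ours, for illustration) would do it:

add_filter( 'myplugin_list_item', 'myplugin_link_post_titles', 10, 2 );

function myplugin_link_post_titles( $title, $post ) {
    // Wrap each title in a link to its post.
    return '<a href="' . esc_url( get_permalink( $post ) ) . '">' . $title . '</a>';
}

Note the final 2 passed to add_filter: it tells WordPress to hand both of the filter’s arguments (the title and the post object) to the callback.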

We Can Also Use Action Hooks

Aside from using apply_filters, we can also use do_action to make our code extensible. The difference between the two is that the former allows others to change a value, while the latter allows others to execute additional functionality at various points in our code.

When using actions, we’re essentially exposing the plugin’s flow to other developers and letting them perform other things in tandem.

It might not be useful in our example (because we only display a list of post titles), but it would be helpful in others. For example, given an extensible backup plugin, we could create a plugin that also uploads the backup file to a third-party service such as Dropbox.
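
To illustrate, a hypothetical backup plugin could fire an action once the backup file is written, and a separate add-on plugin could hook into it. The hook and function names below are invented for the sketch:

// In the (hypothetical) backup plugin, right after the backup file is created:
do_action( 'myplugin_backup_created', $backup_file_path );

// In a separate add-on plugin, run extra work when that action fires:
add_action( 'myplugin_backup_created', 'myaddon_upload_backup_to_dropbox' );

function myaddon_upload_backup_to_dropbox( $backup_file_path ) {
    // Placeholder: hand the file to a Dropbox API client here.
}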

“Great! But Why Should I Care About Making My Plugin Extensible?”

Well, if you’re still not sold on the idea, here are a few thoughts on why allowing other people to modify your plugin’s behavior is a good idea.

It Opens Up the Plugin to More Customization Possibilities

Everyone has different needs. And there’s a big chance your plugin won’t satisfy all of them, nor can you anticipate them. Opening up your plugin to allow for modifications to key areas of your plugin’s behavior can do wonders.

It Allows People to Introduce Modifications Without Touching the Plugin’s Code

Other developers won’t be forced to change your plugin’s files directly. This is a huge benefit because directly modifying a plugin’s file is generally bad practice. If the plugin gets updated, then all of your modifications will be wiped.

If we add our own hooks for other people to use, then the plugin’s modifications can be put in an external location, say, in another plugin. This way, the original plugin isn’t touched at all: it can be updated freely without breaking anything, and all of the modifications in the other plugin remain intact.

Conclusion

Extensible plugins are really awesome and give us room for a lot of customization possibilities. If you make your plugin extensible, your users and other developers will love you for it.

Take a look at plugins such as WooCommerce, Easy Digital Downloads and ACF. These plugins are extensible, and you can easily tell because numerous other plugins in WordPress’ plugins directory add functionality to them. They also provide a wide array of action and filter hooks that modify various aspects of the plugins. The rules of thumb I’ve enumerated above have come up in my study of them.

Here are a few takeaways to make your plugin extensible:

  • Follow the open/closed principle. Other people shouldn’t have to edit your code in order to modify something.
  • To make your plugin extensible, add hooks in these places:
    • around and within major processes,
    • when building the output HTML,
    • for altering post or database queries,
    • before returning values from a function.
  • Namespace your hooks’ names with the name of your plugin to prevent naming conflicts.
  • Try passing other variables that are related to the hook, so that other people get some context of what’s happening in the hook.
  • Don’t forget to document your plugin’s hooks, so that other people can learn of them.



Original post:  

How To Make A WordPress Plugin Extensible