Monthly Web Development Update 5/2018: Browser Performance, Iteration Zero, And Web Authentication
As developers, we often talk about performance and request browsers to render things faster. But when they finally do, we demand even more performance.
Alex Russell from the Chrome team shared some thoughts on developers abusing browser performance and explains why websites are still slow even though browsers have reinvented themselves with incredibly fast rendering engines. This is in line with an article by Oliver Williams in which he argues that we’re focusing on the wrong things: instead of delivering the fastest solutions for slower machines and browsers, we’re serving ever bigger bundles with polyfills and transpiled code to every browser.
It certainly isn’t easy to break out of this pattern and keep bundle size to a minimum in the interest of the user, but we have the technologies to achieve that. So let’s explore non-traditional ways and think about the actual user experience more often — before defining a project workflow instead of afterward.
Front-End Performance Checklist 2018
To help you cater for fast and smooth experiences, Vitaly Friedman summarized everything you need to know to optimize your site’s performance in one handy checklist. Read more →
Firefox 60 is out and brings ECMAScript Modules, as well as the Web Authentication API.
With Chrome 66 already released and the newest Firefox version coming up next, two major browsers are now distrusting all Symantec certificates that were issued before June 2016. And trust me when I say there are a lot of sites that still haven’t replaced their affected certificates and will thus be out of reach for users now (Chrome) or very soon (Firefox).
The Windows 10 April update brought EdgeHTML 17 with mute tabs, autofill forms, a new “print website” mode to save resources, Service Workers and Push Notifications. Variable Fonts, Screen Capture in RTC via the Media Capture API, Subresource Integrity (SRI), and support for the Upgrade-Insecure-Requests header have also been added. Quite a step forward!
npm version 6 is here with some important security improvements. From now on, you not only have a new npm audit command to audit your dependencies for vulnerabilities, but npm will also run this check automatically and report back during dependency installs. The new version also comes with npm ci to make CI tasks faster, and a couple of other improvements.
Node 10 is out with support for async generators, full support for N-API, and support for the Inspector protocol. It will become the next long-term support version in October.
Big news comes from Adobe regarding their XD prototyping product: From now on, the software will be free for anyone with the new Starter Plan. The only differences to the paid plans are limited storage, only one shared prototype (but as many non-shared ones as you want), and only the free Typekit library. The XD team also improved the Sketch and Photoshop integrations, and you can now swap symbols, paste to multiple artboards, and protect design specs with a password, too.
The latest Firefox version comes with Web Authentication API support — a big step towards eliminating passwords. The API lets you log in via a hardware key like a YubiKey if both the browser and the web service support the new technology. Notably, the Chrome 67 beta is already shipping the API as well, and the Chrome team has written a technical implementation guide.
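To give a rough idea of what credential registration with the API looks like, here is a minimal sketch (the challenge and user data would come from your server in a real flow; all values below are illustrative):

const credential = await navigator.credentials.create({
  publicKey: {
    challenge: new Uint8Array(32),            // would be generated server-side
    rp: { name: 'Example Corp' },             // the relying party, i.e. your site
    user: {
      id: new Uint8Array(16),                 // server-side user handle
      name: 'alice@example.com',
      displayName: 'Alice'
    },
    pubKeyCredParams: [{ type: 'public-key', alg: -7 }] // ES256
  }
});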
Starting with Firefox 60, we will be able to specify the SameSite attribute for cookies. This allows a web application to advise the browser that cookies should only be sent if the request originates from the website the cookie came from. You can read more details in the announcement blog post.
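In practice this is a single attribute on the Set-Cookie header. A minimal Node.js sketch:

const http = require('http');

http.createServer((req, res) => {
  // 'Lax' still sends the cookie on top-level navigations;
  // 'Strict' withholds it from all cross-site requests
  res.setHeader('Set-Cookie', 'session=abc123; SameSite=Lax; Secure; HttpOnly');
  res.end('ok');
}).listen(8080);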
The GDPR Checklist is another helpful resource for checking whether a website is compliant with the upcoming EU regulation.
Postgres 10 has been here for quite a while already, but I personally struggled to find good information on how to use all these amazing features it brings along. Gabriel Enslein now shares Postgres 10 performance updates in a slide deck, shedding light on how to use the built-in JSON support, native partitioning for large datasets, hash index resiliency, and more.
Sam Thorogood shares how we can build a “native undo & redo for the web,” as used in many text editors, games, planning and graphics tools, and in interactions such as drag-and-drop reordering. While it’s not easy to build, the article explains the concepts and technical aspects to help us understand this complicated matter.
There’s a new way to implement element/container queries in your application: eqio is a tiny library using IntersectionObserver.
Work & Life
Johannes Seitz shares his thoughts on project management at the start of projects, a method he calls “Iteration Zero.” It’s an interesting concept for better understanding the scope and risks of a project at a time when you don’t yet have enough experience with the project itself but need to build a roadmap to get things started.
Arestia Rosenberg shares why her number one advice for freelancers is to ‘lean into the moment’. It’s about doing work when you can and taking the chance to do something else when you don’t feel you can work productively. In the end, this results in a happier life and more productivity. I’d personally extend this advice to everyone who can arrange their work this way, but, of course, it applies best to freelancers.
Ethan Marcotte elaborates on the ethical issues with Google Duplex, which is designed to imitate the human voice so well that people don’t notice whether they’re talking to a machine or a human being. While this sounds quite interesting from a technical point of view, it will push the debate about fake news much further and make it harder to distinguish between what a human said and what a machine imitated.
I bet most of you haven’t heard of Palantir yet. Funded by Peter Thiel, it’s a data-mining company that intends to collect as much data as possible about everybody in the world. It’s known to collaborate with various law enforcement authorities and even has connections to military services. What they do with the data, and which data they have about us, isn’t known. My only hope right now is that the company will suffer a lot from the EU’s GDPR and that the European Union will try to stop its uncontrolled data collection. Facebook’s data practices seem to be nothing compared to Palantir’s.
Anton Sten shares his thoughts on vanity metrics, the common practice of sharing numbers and statistics out of context. Since realizing how little relevance they have, he now thinks differently about most of the commonly published data, such as investment or usage figures of services. Reading a number without any context to compare it against tells us nothing. We should keep that in mind.
We hope you enjoyed this Web Development Update. The next one is scheduled for Friday, June 15th. Stay tuned.
How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial
React Native is a young technology, already gaining popularity among developers. It is a great option for smooth, fast, and efficient mobile app development. High-performance rates for mobile environments, code reuse, and a strong community: These are just some of the benefits React Native provides.
In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.
After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.
Let’s get right to it.
Brief Description Of The Application
The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.
The application consists of three main screens that you can switch between with the help of a tab navigator:
the audio recording screen,
the video recording screen,
a screen with a list of all recorded media and functionality to play back or delete them.
First, download Expo to your mobile phone. There are two options to open the project:
Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
Open the link with your mobile phone and click on “Open project using Expo”.
You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.
However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.
You can find the full code for the media recording app in the repository on GitHub.
Dependencies Used For App Development
As mentioned, the media recording app is developed with React Native and Expo.
You can see the full list of dependencies in the repository’s package.json file.
These are the main libraries used:
React-navigation, for navigating the application,
Redux, for saving the application’s state,
React-redux, which are React bindings for Redux,
Recompose, for writing the components’ logic,
Reselect, for extracting the state fragments from Redux.
Let’s look at the project’s structure:
src/index.js: root app component imported in the app.js file;
src/components: reusable components;
src/constants: global constants;
src/styles: global styles, colors, font sizes, and dimensions;
src/utils: useful utilities and recompose enhancers;
src/screens: screens components;
src/store: Redux store;
src/navigation: application’s navigator;
src/modules: Redux modules divided by entities as modules/audio, modules/video, modules/navigation.
Let’s proceed to the practical part.
Create Audio Recording Functionality With React Native
First, it’s important to check the documentation for the Expo Audio API related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.
When launching the application for the first time, you’ll need the user’s permission for audio recording, which entails access to the microphone. Let’s use Expo.AppLoading and ask for recording permission by using Expo.Permissions (see src/index.js) during startAsync.
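A sketch of that startup flow, assuming the Expo SDK of the time (AppLoading and Permissions imported from expo; AppNavigator stands in for the app’s actual root component):

import React from 'react';
import { AppLoading, Permissions } from 'expo';
import AppNavigator from './navigation'; // illustrative import

export default class App extends React.Component {
  state = { isReady: false };

  // ask for microphone access while the splash screen is shown
  askForPermissions = async () => {
    await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  };

  render() {
    if (!this.state.isReady) {
      return (
        <AppLoading
          startAsync={this.askForPermissions}
          onFinish={() => this.setState({ isReady: true })}
          onError={console.warn}
        />
      );
    }
    return <AppNavigator />;
  }
}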
Audio recordings are displayed on a separate screen whose UI changes depending on the state.
First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.
My audio recording UI looks like this:
I can save the audio in the Redux store in the following format:
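The exact shape isn’t shown here, but a plausible format might look like this (all field names are illustrative):

{
  id: 1,
  title: 'audio note 1',
  recordDate: '2018-05-01T12:00:00.000Z',
  duration: 14000,                  // milliseconds
  audioUrl: 'file:///path/to/file'  // local URI returned by the recorder
}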
Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.
Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is a dictionary with the following key-value pairs (a sketch of the call follows the list):
playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.
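Under those assumptions, the call might look like this sketch (the interruption-mode constants come from Expo’s Audio module):

await Audio.setAudioModeAsync({
  allowsRecordingIOS: true,
  playsInSilentModeIOS: true,
  interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
  shouldDuckAndroid: true,
  interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
});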
onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI updates accurate, set a 200-millisecond interval with the help of setProgressUpdateInterval:
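For instance (a sketch; onRecordingStatusUpdate is the callback defined in the container):

const recording = new Audio.Recording();
recording.setOnRecordingStatusUpdate(this.onRecordingStatusUpdate);
recording.setProgressUpdateInterval(200); // default is 500 ms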
After creating an instance of this class, call prepareToRecordAsync to prepare it for recording.
recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(). This method can be used if the recording instance has never been prepared.
The parameters of this method include such options for the recording as sample rate, bitrate, channels, format, encoder and extension. You can find a list of all recording options in this document.
In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.
After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().
Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:
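Below is a sketch rather than the article’s exact code; it assumes Expo’s Audio API and the container’s convention of exposing state as getter functions (recording, setRecording and so on are illustrative names):

onStartRecording: props => async () => {
  // dispose of any previous instance first
  if (props.recording()) {
    props.recording().setOnRecordingStatusUpdate(null);
    props.setRecording(null);
  }
  // the audio mode (allowsRecordingIOS etc.) is assumed to be set already
  const recording = new Audio.Recording();
  recording.setOnRecordingStatusUpdate(props.onRecordingStatusUpdate);
  recording.setProgressUpdateInterval(200);
  await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
  await recording.startAsync();
  props.setRecording(recording);
},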
Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable it on iOS, receive and save the local URL of the recording, and set onRecordingStatusUpdate and the recording instance to null:
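A sketch of that handler under the same assumptions (setAudioUrl is an illustrative setter that saves the URI for the Redux store):

onEndRecording: props => async () => {
  try {
    // stops the recording and deallocates the recorder
    await props.recording().stopAndUnloadAsync();
  } catch (error) {
    console.warn(error);
  }
  // the local file URI of the finished recording
  const audioUrl = props.recording().getURI();
  props.recording().setOnRecordingStatusUpdate(null);
  props.setRecording(null);
  props.setAudioUrl(audioUrl);
},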
You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.
There are two ways to initialize a sound. If you use the first method, creating an instance of Audio.Sound, you will need to call playbackObject.loadAsync() after creating the instance; it loads the media from source into memory and prepares it for playing.
The second method is a static convenience method to construct and load a sound in one step. It creates and loads a sound from source, with the optional initialStatus, onPlaybackStatusUpdate, and downloadFirst parameters.
The source parameter is the source of the sound. It supports the following forms:
a dictionary of the form uri: 'http://path/to/file' with a network URL pointing to an audio file on the web;
require('path/to/file') for an audio file asset in the source code directory;
The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls describing the state of the playbackObject at that point of time. It is a dictionary with the key-value pairs. You can check all of the keys of the PlaybackStatus in the documentation.
onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state, every 500 milliseconds by default. In my application, I set a 50-millisecond interval for proper UI updates.
Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:
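These could be Recompose withState enhancers along the following lines (names are illustrative; in the container’s code such props are read as getter functions, e.g. props.isSeeking()):

withState('playbackInstance', 'setPlaybackInstance', null),
withState('isPlaying', 'setIsPlaying', false),
withState('isSeeking', 'setIsSeeking', false),
withState('playbackPosition', 'setPlaybackPosition', 0),
withState('playbackDuration', 'setPlaybackDuration', 0),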
Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:
soundCallback: props => status => {
  if (status.didJustFinish) {
    props.playbackInstance().stopAsync();
  } else if (status.isLoaded) {
    const position = props.isSeeking() ? props.playbackPosition() : status.positionMillis;
    const isPlaying = !props.isSeeking() && status.isPlaying;
    props.setPlaybackPosition(position);
    props.setIsPlaying(isPlaying);
  }
},
After this, you have to implement a handler for the audio playback. If a sound instance has already been created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear onPlaybackStatusUpdate:
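A sketch of that handler, assuming the static Audio.Sound.create convenience method (activeAudio and the setters are illustrative names):

loadPlaybackInstance: props => async shouldPlay => {
  // unload any previous instance first
  if (props.playbackInstance() !== null) {
    props.playbackInstance().setOnPlaybackStatusUpdate(null);
    await props.playbackInstance().unloadAsync();
    props.setPlaybackInstance(null);
  }

  const { sound } = await Audio.Sound.create(
    { uri: props.activeAudio().audioUrl },            // the saved recording's local URI
    { shouldPlay, progressUpdateIntervalMillis: 50 }, // 50 ms status updates
    props.soundCallback
  );
  props.setPlaybackInstance(sound);
},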
Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.
Let’s add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:
onTogglePlaying: props => () => {
  if (props.playbackInstance() === null) return;
  props.isPlaying() ? props.playbackInstance().pauseAsync() : props.playbackInstance().playAsync();
},
When you click on the playing item, it should stop. If you want to stop audio playback and put it into the 0 playing position, you can use the method playbackInstance.stopAsync():
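A sketch of the stop handler:

onStopPlaying: props => () => {
  if (props.playbackInstance() !== null) {
    props.playbackInstance().stopAsync();
  }
},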
The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().
After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):
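Sketched as Recompose handlers (shouldPlayAtEndOfSeek is an illustrative flag remembering whether the audio was playing when the drag started):

onSlidingStart: props => () => {
  if (props.isSeeking()) return;
  props.setIsSeeking(true);
  props.setShouldPlayAtEndOfSeek(props.isPlaying());
  props.playbackInstance().pauseAsync();
},
onSlidingComplete: props => async value => {
  if (props.shouldPlayAtEndOfSeek()) {
    await props.playbackInstance().playFromPositionAsync(value);
  } else {
    await props.playbackInstance().setPositionAsync(value);
  }
  props.setIsSeeking(false);
},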
After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).
Video Recording Functionality With React Native
Let’s proceed to video recording.
We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.
To record video, you need permission for access to the camera and microphone. Let’s add the request for camera access as we did with the audio recording (in the file src/index.js):
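A sketch of the extended permission request (camera plus microphone):

await Permissions.askAsync(Permissions.CAMERA);
await Permissions.askAsync(Permissions.AUDIO_RECORDING);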
Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.
You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.
Here is what my video recording UI looks like:
Let’s write the video recording logic by using Recompose in the screen’s container.
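For example, the camera-type toggle might look like this sketch (cameraType/setCameraType are illustrative withState props; the constants come from Expo’s Camera module):

toggleCameraType: props => () => {
  props.setCameraType(
    props.cameraType() === Camera.Constants.Type.front
      ? Camera.Constants.Type.back
      : Camera.Constants.Type.front
  );
},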
Now, when calling toggleCameraType after clicking the button, the camera will switch from the front to the back.
Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().
The method recordAsync starts recording a video to be saved to the cache directory.
Options (object) — a map of options:
quality (VideoQuality): Specifies the quality of the recorded video. Usage: Camera.Constants.VideoQuality['<value>']; possible values for 16:9 resolution are 2160p, 1080p, 720p, and 480p (Android only), plus 4:3 (the size is 640×480). If the chosen quality is not available for the device, the highest available one is chosen.
maxDuration (number): Maximum video duration in seconds.
maxFileSize (number): Maximum video file size in bytes.
mute (boolean): If present, video will be recorded with no sound.
recordAsync returns a promise that resolves to an object containing the video file’s uri property. You will need to save the file’s URI in order to play back the video later. The promise resolves when stopRecording is invoked, when maxDuration or maxFileSize is reached, or when the camera preview is stopped.
Because the ratio set for the camera component sides is 4:3, let’s set the same format for the video quality.
Here is what the handler for starting video recording looks like (see the full code of the container in the repository):
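A minimal sketch under those assumptions (cameraRef is a ref to the Camera component; setVideoUrl is an illustrative setter):

onStartRecording: props => async () => {
  // resolves once recording is stopped or a limit is reached
  const { uri } = await props.cameraRef().recordAsync({
    quality: Camera.Constants.VideoQuality['4:3'],
  });
  props.setVideoUrl(uri);
},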
As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That’s why you can create custom controls and use more advanced functionality with the Playback API.
Check out the video playback process:
See the full code for the application in the repository.
You can also install the app on your phone by using Expo and check out how it works in practice.
I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.
As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.
Analyzing Your Company’s Social Media Presence With IBM Watson And Node.js
If you are unfamiliar with Machine Learning (ML) technology: it has existed in science fiction for many years and is finally reaching maturity in our society. One of the first ML examples I saw as a kid was in Star Trek: The Next Generation, when Lieutenant Tasha Yar trains with her holographic opponent, which learns from each fight how to better defeat her in future battles.
In today’s society, China has developed a “lane robot”, a computer-controlled guard rail that can direct the flow of traffic into different lanes, increasing safety and improving travel time. This is done automatically, based on the time of day and how much traffic is flowing in each direction.
Another example is Pittsburgh unveiling AI traffic signals that automatically detect traffic patterns and alter the traffic lights on the fly. Each light is controlled independently to help reduce both the commuting time and the idling time of cars. According to the article, pilot tests have demonstrated travel time reduced by 25% and idling time by over 40%. There are, of course, hundreds of other examples of ML technology that make intelligent decisions based on the content it consumes.
To accomplish today’s goal, I am going to demonstrate (using Node.js) how to perform a search with Twitter’s API to retrieve content that will be fed into the ML algorithm for analysis. This will provide you with characteristics about the users who wrote that specific content, giving you a better understanding of your audience. The example application will be written using Node.js as the server.
It is beyond the scope of this article to demonstrate how to write an ML algorithm. Instead, to aid in the analysis, I will demonstrate how to use IBM’s Watson to help you understand the general personality of your social media audience.
What Is IBM Watson?
In 2011, Watson began as a computer system that attempted to index the (entire) Internet. It was originally programmed to answer questions posed in ordinary English. Watson competed on the TV show Jeopardy! and won, claiming the $1,000,000 cash prize.
Watson was now a proven success.
With the fame of winning on Jeopardy!, IBM has continued to push Watson’s capabilities. Watson has evolved into an enterprise-level application focused on Artificial Intelligence (AI), which you can train to identify what you care about most, allowing you to make smarter decisions automatically.
The suite of Watson’s services is divided into six high-level categories:
Conversation The services in this category allow you to build intelligent chatbots or virtual customer service agents.
Knowledge This category is focused on teaching Watson how to interpret data to unlock hidden value and monitor trends.
Vision This service provides the ability to tag content inside an image that is used to train Watson to be able to automatically recognize the same pattern inside of other images.
Speech These services provide the ability to convert speech to text and the inverse, text to speech.
Language This category is split between translating one language to another as well as interpreting the text to predict what predefined category the text belongs to.
Empathy This category is devoted to understanding the content’s tone, personality, and emotional state. Inside this category is a service called “Personality Insights” that will be used in this article to predict the personality characteristics with the social media content we will provide it.
This article will be focusing on understanding the personality of the content that we will fetch from Twitter. However, as you can see, Watson provides many other AI features that you can explore to automate many other processes simply through training and content aggregation.
Personality Insights will analyze content and help you understand the habits and preferences at an individual level and at scale. This is called the ‘personality profile.’ The profile is split into two high-level groups: Personality characteristics and Consumption preferences. These groups are further broken down into more finite components.
Note: To help understand the high-level concepts (before we deep dive into the results), the Personality Insights documentation provides this helpful summary describing how the profile is inferred from the content you provide it.
The Personality Insights service infers personality characteristics based on three primary models:
The ‘Big Five’ personality characteristics represent the most widely used model for generally describing how a person engages with the world. The model includes five primary dimensions:
Agreeableness, Conscientiousness, Extraversion, Emotional range, and Openness. (Note: Each dimension has six facets that further characterize an individual according to the dimension.)
Needs describe which aspects of a product will resonate with a person. The model includes twelve characteristic needs: Excitement, Harmony, Curiosity, Ideal, Closeness, Self-expression, Liberty, Love, Practicality, Stability, Challenge, and Structure.
Values describe motivating factors that influence a person’s decision making. The model includes five values: Self-transcendence / Helping others, Conservation / Tradition, Hedonism / Taking pleasure in life, Self-enhancement / Achieving success, and Open to change / Excitement.
Based on the personality characteristics inferred from the input text, the service can also return an indication of the author’s consumption preferences. ‘Consumption preferences’ indicate the author’s likelihood to pursue different products, services, and activities. The service groups the individual preferences into eight categories:
Shopping
Music
Movies
Reading and learning
Health and activity
Volunteering
Environmental concern
Entrepreneurship
Each category contains from one to as many as a dozen individual preferences.
To be effective, Watson requires a minimum of a hundred words to provide an insight into the consumer’s personality. The more words provided, the better Watson can analyze and determine the consumer’s preference.
This means, if you wish to target individuals, you will need to collect more data than one or two tweets from a specific person. However, if a user writes a product review, blog post, email, or anything else related to your company, this could be analyzed on both an individual level and at scale.
To begin, let’s start by setting up the Personality Insights service to begin analyzing a real-world example.
Configuring The Personality Insights Service
Watson is an enterprise application, but IBM offers a free, limited service. Once you’ve created an account and are logged in, you will need to add the Personality Insights service. IBM offers a Lite plan that is free. The Lite plan is limited to 1,000 API calls per month and is automatically deleted after 30 days — perfect for our demonstration.
Once the service has been added, we will need to retrieve the service’s credentials to perform API calls against it. From Watson’s Dashboard, your service should be displayed. After you’ve selected the service, you’ll find a link to view the Service credentials in the left-hand menu. You will need to create a new ‘Credential.’ A unique name is required and optional configuration parameters can be defaulted for this login. For now, we will leave the configuration options empty.
After you have created a credential, select the ‘View’ credentials link. This will display the API’s URL, your username, and password required to securely execute API calls. Save these somewhere safe as we will need them in the next step.
Testing The Personality Insights Service
To perform API calls, I am going to use Node.js. If you already have Node.js installed, you can move on to the next step; otherwise, follow the instructions to setup Node.js from the official download page.
To demonstrate how to use the Personality Insights, I am going to create a new Node.js project on my computer. With a command prompt open, navigate to the directory where your Node.js projects will be stored and create your new project:
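For example (the project name is illustrative):

mkdir watson-personality-insights
cd watson-personality-insights
npm init -y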
To assist with making the API calls to Watson, I am going to leverage the NPM Package: Watson Developer Cloud Node.js SDK. This package can be installed via the command prompt:
npm install watson-developer-cloud --save
Before making the first call, the PersonalityInsightsV3 object needs to be instantiated with the credentials from the previous section. Begin by creating a new file called index.js that will contain the Node.js code.
Here is an example of configuring the class so it is ready to make API calls:
var PersonalityInsightsV3 = require('watson-developer-cloud/personality-insights/v3');

var personality_insights = new PersonalityInsightsV3({
  // replace with the credentials you saved earlier
  username: 'YOUR_USERNAME',
  password: 'YOUR_PASSWORD',
  version_date: '2017-10-13' // a version date supported by the service
});
The personality_insights variable is what we will use to interact with the API for the Personality Insights service. Let’s review how to execute a call and return a personality profile:
var fs = require('fs');

personality_insights.profile({
  "contentItems": [{
    "content": "Some content that contains more than 100 words...",
    "contenttype": "text/plain",
    "language": "en"
  }],
  "consumption_preferences": true
}, (err, response) => {
  if (err) throw err;
  fs.writeFile("results.txt", JSON.stringify(response, null, 2), function(err) {
    if (err) throw err;
    console.log("Results were saved!");
  });
});
The profile function accepts an array of contentItems. Each content item contains the actual content along with a few additional properties that help Watson interpret it.
When this is executed, the results are written to a text file (the results are too large to write in the console). The result is an object that contains the following high-level properties:
word_count The count of words interpreted.
processed_language The language of the content provided, e.g. en.
Personality This is an array of the ‘Big Five’ personality characteristics (Openness, Conscientiousness, Extraversion, Agreeableness, and Emotional range). Each characteristic contains an overall percentile for that characteristic (e.g. 0.8100175318417588). To ascertain more detail, there is an array called children that provides more in-depth insight. For example, a child category under ‘Openness’ is ‘Adventurousness’ that contains its own percentile.
Needs This is an array of the twelve characteristics that define which aspects of a product will resonate with a person (Excitement, Harmony, Curiosity, Ideal, Closeness, Self-expression, Liberty, Love, Practicality, Stability, Challenge, and Structure). Each characteristic contains a percentile of how the content was interpreted.
Values This is an array of the five characteristics that describe motivating factors that influence a person’s decision making (Self-transcendence / Helping others, Conservation / Tradition, Hedonism / Taking pleasure in life, Self-enhancement / Achieving success, and Open to change / Excitement). Each characteristic contains a percentile of how the content was interpreted.
Behavior This is an array that contains thirty-one elements. Each element provides a percentile for when the content was created. Seven of the elements define the days of the week (Sunday through Saturday). The remaining twenty-four elements define the hours of the day. This helps you understand when customers interact with your product.
consumption_preferences This is an array that contains eight different categories with up to a dozen child categories each, providing a percentile of likelihood to pursue different products, services, and activities (Shopping, Music, Movies, Reading and learning, Health and activity, Volunteering, Environmental concern, and Entrepreneurship).
Warnings This is an array that provides messages if a problem was encountered interpreting the content provided.
Once you’ve created your Twitter application, you need to retrieve the authorization keys required to perform API calls. With your application created, navigate to the ‘Keys and Access Tokens’ page. Since we are not performing API calls on behalf of Twitter users, OAuth integration is not required. Instead, we need only the following four keys:
Consumer Key
Consumer Secret
Access Token
Access Token Secret
The last two keys need to be generated near the bottom of the ‘Keys and Access Tokens’ page. With the keys in hand, here is an example of searching for tweets about #SmashingMagazine:
var Twitter = require('twitter');

var client = new Twitter({
  consumer_key: 'CONSUMER_KEY',
  consumer_secret: 'CONSUMER_SECRET',
  access_token_key: 'ACCESS_TOKEN',
  access_token_secret: 'ACCESS_TOKEN_SECRET'
});

client.get('search/tweets', { q: '#SmashingMagazine' }, function(error, tweets, response) {
  if (error) throw error;
  console.log(tweets.statuses);
});
The result of this code will log a list of tweets about Smashing Magazine. For the purposes of this demonstration, the following fields are of interest to us: created_at, id, lang, and text. These are the fields we will feed to Watson.
Integrating Personality Insights With Twitter
With Twitter setup and Watson setup, it’s time to integrate the two together and see the results. To make it interesting, let’s search for #DonaldTrump to see what the world thinks about the President of the United States. Here is the code example to search Twitter, feed the results into Watson, and write the results to a text file:
var fs = require('fs');
var Twitter = require('twitter');
var client = new Twitter({
  consumer_key: 'CONSUMER_KEY',
  consumer_secret: 'CONSUMER_SECRET',
  access_token_key: 'ACCESS_TOKEN',
  access_token_secret: 'ACCESS_TOKEN_SECRET'
});
var PersonalityInsightsV3 = require('watson-developer-cloud/personality-insights/v3');
var personality_insights = new PersonalityInsightsV3({
  username: 'YOUR_USERNAME',
  password: 'YOUR_PASSWORD',
  version_date: '2017-10-13'
});

client.get('search/tweets', { q: '#DonaldTrump' }, function(error, tweets, response) {
  if (error) throw error;

  var contentItems = [];
  // Loop through the tweets and turn each one into a Watson content item
  for (var i = 0; i < tweets.statuses.length; i++) {
    var tweet = tweets.statuses[i];
    contentItems.push({
      "content": tweet.text,
      "contenttype": "text/plain",
      "created": new Date(tweet.created_at).getTime(),
      "id": String(tweet.id),
      "language": tweet.lang
    });
  }

  // Call Watson with the tweets
  personality_insights.profile({
    "contentItems": contentItems,
    "consumption_preferences": true
  }, (err, response) => {
    if (err) throw err;
    // Write the results to a file
    fs.writeFile("results.txt", JSON.stringify(response, null, 2), function(err) {
      if (err) throw err;
      console.log("Results were saved!");
    });
  });
});
Here is another CodePen of the formatted results that I received:
Once we’ve analyzed the ‘Openness’ trait of the ‘Big Five,’ we can infer the following:
Emotion is quite low at 13%
Imagination is average at 54%
Intellect is very high at 96%
Authority challenging is also quite high at 87%
At a high level, the ‘Conscientiousness’ trait is average at 46%, compared with the ‘Openness’ high-level average of 88%, whereas ‘Agreeableness’ is very low at only 25%. I guess people on Twitter don’t like to agree with Donald Trump.
Moving on to the ‘Needs’: the sub-categories ‘Curiosity’ and ‘Structure’ are around the 60th percentile, compared to other categories that fall below the 10th percentile (Excitement, Harmony, etc.).
And finally, under ‘Values,’ the sub-category that stands out to me as interesting is ‘Open to change’ at an abysmal 6%.
Based on when you perform your search, your results may vary as the results are limited to the past seven days from executing the example.
From these results, I would determine that the average person who tweets about Donald Trump is quite intellectual, challenges authority, and is not open to change.
These results would allow you to automatically adjust how you target your content towards your audience. You will need to determine which categories are of interest and which percentiles you wish to target. With this ammunition, you can begin automating.
What Else Can I Do With Watson?
As I mentioned at the beginning of this article, Watson offers many other different services. With these services, you could automate many different parts of common business processes. For example:
Building a chat bot that can intelligently answer questions based on a knowledge base of information;
Building an application where you dictate to Watson what you want written by using the speech-to-text functionality;
Automatically translating your content into different languages to create a multi-lingual site or knowledge base;
Teaching Watson how to look for specific patterns in images. This could be used to determine whether a logo is embedded in a photo.
This, of course, is a very small subset that my limited imagination can postulate. I’m sure you can think of many other ways to leverage Watson’s immense capabilities.
If you are looking for more examples, IBM has an entire GitHub repository dedicated to its Node.js SDK. The examples folder contains over ten sample applications covering speech to text, text to speech, tone analysis, and visual recognition, to name just a few.
Before Watson can run away with technological growth, resulting in the singularity where Artificial Intelligence destroys mankind, this article demonstrated how you can turn social media content into a powerful understanding of how the people creating that content think. Using the results from Watson, your application can use the categories of interest where the percentile exceeds or falls below a predetermined amount to change how you target your audience.
If you have other interesting uses of Watson, or of the Personality Insights service in particular, be sure to leave a comment below.
Conditioner And Progressive Enhancement Sitting In A Tree
Before we proceed, I need to get one thing across:
Conditioner is not a framework for building web apps.
Instead, it’s aimed at websites. The distinction between websites and web apps is useful for the continuation of this story. Let me explain how I view the overall difference between the two.
Examples of content-oriented websites are Wikipedia, Smashing Magazine, your local municipality’s website, newspapers, and webshops. Web apps are often found in the utility area; think of web-based email clients and online maps. While web apps also present content, their focus is often more on interacting with content than on presenting it. There’s a huge grey area between the two, but this contrast will help us decide when Conditioner might be effective and when we should steer clear.
As stated earlier, Conditioner is all about websites, and it’s specifically built to deal with that third act:
The Troublesome Third Act
A class is added to an HTML element.
The querySelectorAll method is used to get all elements assigned the class.
A for-loop traverses the NodeList returned in step 2.
Let’s quickly put this workflow in code by adding autocomplete functionality to an input field. We’ll create a file called autocomplete.js and add it to the page using a <script> tag.
// autocomplete.js: our autocomplete logic
function autocomplete(input) { /* ... */ } // illustrative init function

<input type="text" class="autocomplete"/>

var inputs = document.querySelectorAll('.autocomplete');
for (var i = 0; i < inputs.length; i++) {
  autocomplete(inputs[i]);
}
Suppose we’re now told to add another functionality to the page, say a date picker; its initialization will most likely follow the same pattern. Now we’ve got two for-loops. Add another functionality, and you’ve got three, and so on. Not the best.
While this works and keeps you off the street, it creates a host of problems. We’ll have to add a loop to our initialization script for each functionality we add. For each loop we add, the initialization script gets tied ever tighter to the document structure of our website. The initialization script will often be loaded on each page, meaning that all the querySelectorAll calls for all the different functionalities will run on each and every page, whether the functionality is present on the page or not.
For me, this setup never felt quite right. It always started out “okay,” but then it would slowly grow to a long list of repetitive for-loops. Depending on the project it might contain some conditional logic here and there to determine if something loads on a certain viewport or not.
if (window.innerWidth <= 480) {
  // small viewport for-loops here
}
Eventually, my initialization script would always grow out of control and turn into a giant pile of spaghetti code that I would not wish on anyone.
Something needed to be done.
That stack of spaghetti loops, though. I wanted to get rid of them so badly.
We’ll quickly update our script to use data attributes instead of classes.
<input type="text" data-module="autocomplete">
var inputs = document.querySelectorAll('[data-module=autocomplete]');
for (var i = 0; i < inputs.length; i++)
But hang on, this is nearly the same setup; we’ve only replaced .autocomplete with [data-module=autocomplete]. How’s that any better? It’s not, you’re right. If we add an additional functionality to the page, we still have to duplicate our for-loop — blast! Don’t be sad though as this is the stepping stone to our killer for-loop.
Watch what happens when we make a couple of adjustments.
<input type="text" data-module="createAutocomplete">
var elements = document.querySelectorAll('[data-module]');
for (var i = 0; i < elements.length; i++)
var name = elements[i].getAttribute('data-module');
var factory = window[name];
Now we can load any functionality with a single for-loop.
Find all elements on the page with a data-module attribute;
Loop over the node list;
Get the name of the module from the data-module attribute;
Look up the factory function with that name on the global scope and call it, passing the element.
This basic setup has some other advantages as well:
The init script no longer needs to know what it loads; it just needs to be very good at this one little trick.
The init script does not search for modules that are not there, i.e. no wasted DOM searches.
The init script is done. No more adjustments are needed. When we add functionality to the page, it will automatically be found and will simply work.
So What About This Thing Called Conditioner?
We finally have our single loop, our one loop to rule all other loops, our king of loops, our hyper-loop. Ehm. Okay. We’ll just have to conclude that ours is a loop of high quality, so flexible that it can be re-used in each project (there’s not really anything project-specific about it). That does not immediately make it library-worthy; it’s still quite a basic loop. However, we’ll find that our loop requires some additional trickery to really cover all our use cases.
With the one loop, we are now loading our functionality automatically.
We assign a data-module attribute to an element.
We add a <script> tag to the page referencing our functionality.
The loop matches the right functionality to each element.
Let’s take a look at what we need to add to our loop to make it a bit more flexible and re-usable. Because as it is now, while amazing, we’re going to run into trouble.
It would be handy if we moved the global functions to isolated modules. This prevents pollution of the global scope and makes our modules more portable to other projects. And we’ll no longer have to add our <script> tags manually: fewer things to add to the page, fewer things to maintain.
When using our portable modules across multiple projects (and/or pages) we’ll probably encounter a situation where we need to pass configuration options to a module. Think API keys, labels, animation speeds. That’s a bit difficult at the moment as we can’t access the for-loop.
With the ever-growing diversity of devices out there we will eventually encounter a situation where we only want to load a module in a certain context. For instance, a menu that needs to be collapsed on small viewports. We don’t want to add if-statements to our loop. It’s beautiful as it is, we will not add if statements to our for-loop. Never.
That’s where Conditioner can help out. It encompasses all above functionality. On top of that, it exposes a plugin API so we can configure and expand Conditioner to exactly fit our project setup.
Let’s make that 1 kilobyte jump and replace our initialization loop with Conditioner.
Switching To Conditioner
We can get the Conditioner library from the GitHub repository, npm or from unpkg. For the rest of the article, we’ll assume the Conditioner script file has been added to the page.
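With the library on the page, mounting modules takes a data-module attribute plus a single call to hydrate. A minimal sketch:

<input type="text" data-module="./autocomplete.js"/>

<script src="conditioner.js"></script>
<script>
  // find all elements with a data-module attribute and load their modules
  conditioner.hydrate(document.documentElement);
</script>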
Conditioner will now automatically lazy load ./autocomplete.js, and once received, it will call the module.default function and pass the element as a parameter.
Defining our autocomplete as ./autocomplete.js is very verbose. It’s difficult to read, and when adding multiple modules to the page, it quickly becomes tedious to write and error-prone.
This can be remedied by overriding the moduleSetName action. Conditioner views the data-module value as an alias and will only use the value returned by moduleSetName as the actual module name. Let’s automatically add the js extension and relative path prefix to make our lives a bit easier.
<input type="text" data-module="autocomplete"/>
conditioner.addPlugin({
  // converts module aliases to paths
  moduleSetName: (name) => `./${name}.js`
});
Now we can set data-module to autocomplete instead of ./autocomplete.js, much better.
That’s it! We’re done! We’ve setup Conditioner to load ES Modules. Adding modules to a page is now as easy as creating a module file and adding a data-module attribute.
The plugin architecture makes Conditioner super flexible. Because of this flexibility, it can be modified for use with a wide range of module loaders and bundlers. There’s bootstrap projects available for Webpack, Browserify and RequireJS.
Please note that Conditioner does not handle module bundling. You’ll have to configure your bundler to find the right balance between serving a bundled file containing all modules or a separate file for each module. I usually cherry pick tiny modules and core UI modules (like navigation) and serve them in a bundled file while conditionally loading all scripts further down the page.
Alright, module loading — check! It’s now time to figure out how to pass configuration options to our modules. We can’t access our loop; also we don’t really want to, so we need to figure out how to pass parameters to the constructor functions of our modules.
Passing Configuration Options To Our Modules
I might have bent the truth a little bit. Conditioner has no out-of-the-box solution for passing options to modules. There I said it. To keep Conditioner as tiny as possible I decided to strip it and make it available through the plugin API. We’ll explore some other options of passing variables to modules and then use the plugin API to set up an automatic solution.
The easiest and at the same time most banal way to create options that our modules can access is to define options on the global window scope.
We’ve only eliminated the dataset call, i.e. seven characters. Not the biggest improvement, but we’ve opened the door to take this a bit further.
Suppose we have multiple autocomplete modules on the page, and each and every single one of them requires the same API key. It would be handy if that API key was supplied automagically instead of having to add it as a data attribute on each element.
We can improve our developer lives by adding a page level configuration object.
const pageOptions = {
  autocomplete: { // the module alias
    key: 'abc123' // api key
  }
};

conditioner.addPlugin({
  // receives the name of the module and the element it's being mounted to
  moduleSetConstructorArguments: (name, element) => ([
    element,
    // merge the default page options with the options set on the element itself
    Object.assign({}, pageOptions[name], element.dataset)
  ])
});
As our pageOptions variable has been defined with const it’ll be block-scoped, which means it won’t pollute the global scope. Nice.
Using Object.assign we merge an empty object with both the pageOptions for this module and the dataset DOMStringMap found on the element. This will result in an options object containing both the source property and the key property. Should one of the autocomplete elements on the page have a data-key attribute, it will override the pageOptions default key for that element.
That’s some top-notch developer convenience right there.
By having added this tiny plugin, we can automatically pass options to our modules. This makes our modules more flexible and therefore re-usable over multiple projects. We can still choose to opt-out and use dataset or globally scope our configuration variables (no, don’t), whatever fits best.
Our next challenge is the conditional loading of modules. It’s actually the reason why Conditioner is named Conditioner. Welcome to the inner circle!
Conditionally Loading Modules Based On User Context
Back in 2005, desktop computers were all the rage, everyone had one, and everyone browsed the web with it. Screen resolutions ranged from big to bigger. And while users could scale down their browser windows, we looked the other way and basked in the glory of our beautiful fixed-width sites.
I’ve rendered an artist impression of the 2005 viewport:
I’ve applied this knowledge to our artist impression below.
Holy smokes! That’s a lot of viewports.
Today, someone might visit your site on a small mobile device connected to a crazy fast WiFi hotspot, while another user might access your site using a desktop computer on a slow tethered connection. Yes, I switched up the connection speeds — reality is unpredictable.
And to think we were worried about users resizing their browser window. Hah!
Note that those million viewports are not set in stone. A user might load a website in portrait orientation and then rotate the device, (or, resize the browser window), all without reloading the page. Our websites should be able to handle this and load or unload functionality accordingly.
With Conditioner in place, let’s configure it as a gatekeeper and have it load modules based on the current user context. The user context contains information about the environment in which the user is interacting with your functionality. Some examples of environment variables influencing context are viewport size, time of day, location, and battery level. The user can also supply you with context hints, for instance, a preference for reduced motion. How a user behaves on your platform will also tell you something about the context she might be in, is this a recurring visit, how long is the current user session?
The better we’re able to measure these environment variables the better we can enhance our interface to be appropriate for the context the user is in.
We’ll need an attribute to describe our modules’ context requirements so that Conditioner can determine the right moment for a module to load and to unload. We’ll call this attribute data-context. It’s pretty straightforward.
Let’s leave our lovely autocomplete module behind and shift focus to a new module. Our new section-toggle module will be used to hide the main navigation behind a toggle button on small viewports.
Since it should be possible for our section-toggle to be unloaded, the default function returns another function. Conditioner will call this function when it unloads the module.
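A minimal sketch of that module shape (the actual toggle logic is elided):

export default (element) => {
  // set up the collapsed menu and the toggle button here

  // Conditioner calls this returned function when it unloads the module
  return () => {
    // restore the original menu state here
  };
};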
We don’t need the toggle behavior on big viewports as those have plenty of space for our menu (it’s a tiny menu). We only want to collapse our menu on viewports narrower than 30em (this translates to 480px); a markup sketch follows below.
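A sketch of the markup (the module name is illustrative):

<nav data-module="sectionToggle" data-context="@media (max-width:30em)">
  <!-- menu items -->
</nav>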
The data-context attribute will trigger Conditioner to automatically load a context monitor observing the media query (max-width:30em). When the user context matches this media query, it will load the module; when it does not, or no longer does, it will unload the module.
Monitoring happens based on events. This means that after the page has loaded, should the user resize the viewport or rotate the device, the user context is re-evaluated and the module is loaded or unloaded based on the new observations.
You can view monitoring as a continuous form of feature detection. Feature detection is about an on/off situation: the browser either supports WebGL, or it doesn’t. Context monitoring is a continuous process: the initial state is observed at page load, but monitoring continues afterwards. While the user is navigating the page, the context is monitored, and observations can influence page state in real time.
The media query monitor is the only monitor that is available by default. Adding your own custom monitors is possible using the plugin API. Let’s add a visible monitor which we’ll use to determine if an element is visible to the user (scrolled into view). To do this, we’ll use the brand new IntersectionObserver API.
conditioner.addPlugin({
  // the monitor hook expects a configuration object
  monitor: {
    // the name of our monitor without the '@'
    name: 'visible',
    // the create method will return our monitor API
    create: (context, element) => ({
      // current match state
      matches: false,
      // called by Conditioner to start listening for changes
      addListener(change) {
        new IntersectionObserver(entries => {
          // update the matches state
          this.matches = entries.pop().isIntersecting == context;
          // inform Conditioner of the state change
          change();
        }).observe(element);
      }
    })
  }
});
We now have a visible monitor at our disposal.
Let’s use this monitor to load images only when they are scrolled into view. The markup could look like this (reconstructed; the href value is illustrative):
<a href="cat.jpg" data-module="lazyImage" data-context="@visible">A red cat eating a yellow bird</a>
The lazyImage module will extract the link text, create an image element, and set the link text to the alt text of the image.
export default (element) => {
  // store original link text
  const text = element.textContent;

  // replace element text with image
  const image = new Image();
  image.src = element.href;
  image.alt = text;
  element.innerHTML = '';
  element.appendChild(image);

  return () => {
    // restore original element state
    element.innerHTML = text;
  };
};
When the anchor is scrolled into view, the link text is replaced with an img tag.
Because we’ve returned an unload function the image will be removed when the element scrolls out of view. This is most likely not what we desire.
We can remedy this behavior by adding the was operator. It will tell Conditioner to retain the first matched state.
<a href="cat.jpg" data-module="lazyImage" data-context="was @visible">A red cat eating a yellow bird</a>
There are three other operators at our disposal.
The not operator lets us invert a monitor result. Instead of writing @visible false we can write not @visible which makes for a more natural and relaxed reading experience.
Last but not least, we can use the or and and operators to string monitors together and form complex context requirements. Using and combined with or we can do lazy image loading on small viewports and load all images at once on big viewports.
data-context="was @visible and @media (max-width:30em) or @media (min-width:30em)">
A red cat eating a yellow bird
We’ve looked at the @media monitor and have added our custom @visible monitor. There are lots of other contexts to measure and custom monitors to build:
Tap into the Geolocation API and monitor the location of the user @location (near: 51.4, 5.4) to maybe load different scripts when a user is near a certain location.
Imagine a @time monitor, which would make it possible to enhance a page dynamically based on the time of day @time (after 20:00).
By moving context monitoring outside of our modules, our modules have become even more portable. If we need to add collapsible sections to one of our pages, it’s now easy to re-use our section toggle module, because it’s not aware of the context in which it’s used. It just wants to be in charge of toggling something.
And this is what Conditioner makes possible, it extracts all distractions from the module and allows you to write a module focused on a single task.
Conditioner exposes a total of three methods. We’ve already encountered the hydrate and addPlugin methods. Let’s now have a look at the monitor method.
The monitor method lets us manually monitor a context and receive context updates.
const monitor = conditioner.monitor('@media (min-width:30em)');
monitor.onchange = (matches) => {
  // called when a change to the context was observed
};
As a quick example, I’ve built a React <ContextRouter> component that uses Conditioner to monitor user context queries and switch between views. It’s heavily inspired by React Router so might look familiar.
<ContextRouter>
  <Context query="@media (min-width:30em)"
           component={ FancyInfoGraphic } />
  {/* fallback to use on smaller viewports; Fallback is a hypothetical component */}
  <Context component={ Fallback } />
</ContextRouter>
I hope someone out there is itching to convert this to Angular. As a cat and React person, I just can’t get myself to do it.
Replacing our initialization script with the killer for-loop created a single entity in charge of loading modules. From that change automatically followed a set of requirements. We used Conditioner to fulfill these requirements and then wrote custom plugins to extend Conditioner where it didn’t fit our needs.
Not having access to our single for-loop steered us towards writing more re-usable and flexible modules. By switching to dynamic imports, we could lazy load these modules, and later load them conditionally by combining lazy loading with context monitoring.
With conditional loading, we can quickly determine when to send which module over the connection, and by building advanced context monitors and queries, we can target more specific contexts for enhancement.
By combining all these tiny changes, we can speed up page load time and more closely match our functionality to each different context. This will result in improved user experience and as a bonus improve our developer experience as well.
So you’ve trained yourself as a web engineer, and now want to build a blazing fast online shop for your customers. The product list should appear in an instant, and searching should waste no more than a split second either. Is that the stuff of daydreams?
Once upon a time, there lived a web developer who successfully convinced his customers that sites should not look the same in all browsers, cared about accessibility, and was an early adopter of CSS grids. But deep down in his heart it was performance that was his true passion: He constantly optimized, minified, monitored, and even employed psychological tricks in his projects.
Then, one day, he learned about lazy-loading images and other assets that are not immediately visible to users and are not essential for rendering meaningful content on the screen.
I hope you had a great start into the new year. And while it’s quite an arbitrary date, many of us take the start of the year as an opportunity to try to change something in our lives. I think it’s well worth doing so, and I wish you the best of luck for accomplishing your realistic goals. I, for my part, want to start working on my mindfulness, on being able to focus, and on pursuing my dream of building an ethically correct, human company with Colloq that provides real value to users and is profitable by its users.
There’s a high chance you’ve come across the term “REST API” if you’ve thought about getting data from another source on the internet, such as Twitter or GitHub. But what is a REST API? What can it do for you? How do you use it?
In this article, you’ll learn everything you need to know about REST APIs to be able to read API documentations and use them effectively.