Monthly Web Development Update 5/2018: Browser Performance, Iteration Zero, And Web Authentication

Anselm Hannemann



As developers, we often talk about performance and request browsers to render things faster. But when they finally do, we demand even more performance.

Alex Russell from the Chrome team shared some thoughts on developers abusing browser performance and explains why websites are still slow even though browsers have reinvented themselves with incredibly fast rendering engines. This is in line with an article by Oliver Williams in which he states that we’re focusing on the wrong things: instead of delivering the fastest solutions for slower machines and browsers, we’re serving ever bigger bundles with polyfills and transpiled code to every browser.

It certainly isn’t easy to break out of this pattern and keep bundle size to a minimum in the interest of the user, but we have the technologies to achieve that. So let’s explore non-traditional ways and think about the actual user experience more often — before defining a project workflow instead of afterward.

Front-End Performance Checklist 2018

To help you cater for fast and smooth experiences, Vitaly Friedman summarized everything you need to know to optimize your site’s performance in one handy checklist. Read more →

News

General

  • Oliver Williams wrote about how important it is that we rethink how we’re building websites and implement “progressive enhancement” to make the web work great for everyone. After all, it’s us who make the experience worse for our users when blindly transpiling all our ECMAScript code or serving tons of JavaScript polyfills to those who already use slow machines and old software.
  • Ian Feather reveals that around 1% of all requests for JavaScript on BuzzFeed time out. That’s about 13 million requests per month. A good reminder of how important it is to provide a solid fallback, progressive enhancement, and workarounds.
  • The new GDPR (General Data Protection Regulation) takes effect very soon, and while our inboxes are full of privacy policy updates, one thing that’s still very unclear is which services can already provide so-called DPAs (Data Processing Agreements). Joschi Kuphal collects services that offer a DPA, so that we can easily look them up and see how we can obtain a copy in order to continue using these services. You can help by contributing to this resource via Pull Requests.

UI/UX

Product design principles
How to create a consistent, harmonious user experience when designing product cards? Mei Zhang shares some valuable tips.

Security

Privacy

  • The GDPR Checklist is another helpful resource for checking whether a website is compliant with the upcoming EU regulation.
  • Bloomberg published a story about the open-source privacy-protection project pi-hole, why it exists and what it wants to achieve. I use the software daily to keep my entire home and work network tracking-free.
GDPR Compliance Checklist
Achieving GDPR compliance shouldn’t be a struggle. The GDPR Compliance Checklist helps you see more clearly.

Web Performance

  • Postgres 10 has been here for quite a while already, but I personally struggled to find good information on how to use all these amazing features it brings along. Gabriel Enslein now shares Postgres 10 performance updates in a slide deck, shedding light on how to use the built-in JSON support, native partitioning for large datasets, hash index resiliency, and more.
  • Andrew Betts found out that a lot of websites are using outdated headers. He now shares why we should drop old headers and which ones to serve instead.

Accessibility

Page previews
Page previews open possibilities in multiple areas, as Nirzar Pangarkar explains. (Image credit: Nirzar Pangarkar)

CSS

  • Rarely talked about in recent years, CSS tables are still used on most websites to display tabular data (which is exactly what they’re meant for). But since they aren’t responsive by default, we’ve always struggled to make them work on mobile screens, and most of us resorted to JavaScript. Lea Verou now found two new ways to achieve responsive tables with CSS alone: one uses text-shadow to copy text to other rows, the other uses element() to copy the entire <thead> to other rows. I’m still trying to understand how Lea found these solutions, but this is amazing!
  • Rachel Andrew wrote an article about building and providing print stylesheets in 2018 and why they matter a lot for users even if they don’t own a printer anymore.
  • Osvaldas Valutis shares how to implement the so-called “Priority Plus” navigation pattern mostly with CSS, at least in modern browsers. If you need to support older browsers, you will need to extend this solution further, but it’s a great start to implement such a pattern without too much JavaScript.
  • Rachel Andrew shares what’s coming up in the CSS Grid Level 2 and Subgrid specifications and explains what it is, what it can solve, and how to use it once it is available in browsers.

JavaScript

  • Chris Ashton “used the web for a day with JavaScript turned off.” This piece highlights the importance of thinking about possible JavaScript failures on websites and why it matters if you provide fallbacks or not.
  • Sam Thorogood shares how we can build a “native undo & redo for the web”, as used in many text editors, games, planning or graphical software and other occasions such as a drag and drop reordering. And while it’s not easy to build, the article explains the concepts and technical aspects to help us understand this complicated matter.
  • There’s a new way to implement element/container queries into your application: eqio is a tiny library using IntersectionObserver.

Work & Life

  • Johannes Seitz shares his thoughts about project management at the start of projects. He calls the method “Iteration Zero”. An interesting concept to understand the scope and risks of a project better at a time when you still don’t have enough experience with the project itself but need to build a roadmap to get things started.
  • Arestia Rosenberg shares why her number one advice for freelancers is to ‘lean into the moment’. It’s about doing work when you can and using the chance to do something else when you don’t feel you can work productively. In the end, it adds up to a happier life and more productivity. I’d personally extend this to everyone who is able to work that way, but, of course, it applies best to freelancers.
  • Sam Altman shares a couple of handy productivity tips that are not just a ‘ten things to do’ list but actually really helpful thoughts about how to think about being productive.

Going Beyond…

  • Ethan Marcotte elaborates on the ethical issues with Google Duplex, which is designed to imitate the human voice so well that people can’t tell whether they’re talking to a machine or a human being. While this sounds quite interesting from a technical point of view, it will push the fake-news debate much further and make it even harder to tell whether something was said by a human or imitated by a machine.
  • Our world is actually built on promises, and here’s why it’s so important to stick to your promises even if it’s hard sometimes.
  • I bet that most of you haven’t heard of Palantir yet. Funded by Peter Thiel, it is a data-mining company that intends to collect as much data as possible about everybody in the world. It’s known to collaborate with various law enforcement authorities and even has connections to military services. Which data they hold about us and what they do with it isn’t known. My only hope right now is that this company will suffer a lot under the EU’s GDPR and that the European Union will try to stop its uncontrolled data collection. Facebook’s data practices seem to be nothing compared to Palantir’s.
  • Researchers sound the alarm after an analysis showed that buying a new smartphone consumes as much energy as using an existing phone for an entire decade. I guess I’ll not replace my iPhone 7 anytime soon — it’s still an absolutely great device and just enough for what I do with it.
  • Anton Sten shares his thoughts on vanity metrics, a common way of sharing numbers and statistics out of context. Since realizing how little relevance they often have, he now thinks differently about most commonly shared figures such as investments or usage data of services. Reading one number without any context to compare it to doesn’t tell us anything. We should keep that in mind.

We hope you enjoyed this Web Development Update. The next one is scheduled for Friday, June 15th. Stay tuned.


How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

Oleh Mryhlod



React Native is a young technology, already gaining popularity among developers. It is a great option for smooth, fast, and efficient mobile app development. High-performance rates for mobile environments, code reuse, and a strong community: These are just some of the benefits React Native provides.

In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.

We will delve into the step-by-step process of creating a video/audio recording app with React Native and Expo. Expo is an open-source toolchain built around React Native for developing iOS and Android projects with React and JavaScript. It provides a bunch of native APIs maintained by native developers and the open-source community.

After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.

Let’s get right to it.

Brief Description Of The Application

The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.

The application consists of three main screens that you can switch between with the help of a tab navigator:

  • the audio recording screen,
  • the video recording screen,
  • a screen with a list of all recorded media and functionality to play back or delete them.

Check out how this app works by opening this link with Expo.

First, download Expo to your mobile phone. There are two options to open the project:

  1. Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
  2. Open the link with your mobile phone and click on “Open project using Expo”.

You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.

However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.

You can find the full code for the media recording app in the repository on GitHub.

Dependencies Used For App Development

As mentioned, the media recording app is developed with React Native and Expo.

You can see the full list of dependencies in the repository’s package.json file.

These are the main libraries used:

  • React-navigation, for navigating the application,
  • Redux, for saving the application’s state,
  • React-redux, the React bindings for Redux,
  • Recompose, for writing the components’ logic,
  • Reselect, for extracting the state fragments from Redux.

Let’s look at the project’s structure:



  • src/index.js: root app component imported in the app.js file;
  • src/components: reusable components;
  • src/constants: global constants;
  • src/styles: global styles, colors, font sizes, and dimensions;
  • src/utils: useful utilities and recompose enhancers;
  • src/screens: screens components;
  • src/store: Redux store;
  • src/navigation: application’s navigator;
  • src/modules: Redux modules divided by entities as modules/audio, modules/video, modules/navigation.

Let’s proceed to the practical part.

Create Audio Recording Functionality With React Native

First, it’s important to check the documentation for the Expo Audio API related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.

When launching the application for the first time, you’ll need the user’s permission for audio recording, which entails access to the microphone. Let’s use Expo.AppLoading and ask permission for recording by using Expo.Permissions (see the src/index.js) during startAsync.

await Permissions.askAsync(Permissions.AUDIO_RECORDING);
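
A minimal sketch of how this request might be wired into the app’s startup (the component shape and the MainNavigator name are illustrative, not taken from the article):

import React from 'react';
import { AppLoading, Permissions } from 'expo';

class App extends React.Component {
  state = { isReady: false };

  // Ask for microphone access while the splash screen is still visible.
  _startAsync = async () => {
    await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  };

  render() {
    if (!this.state.isReady) {
      return (
        <AppLoading
          startAsync={this._startAsync}
          onFinish={() => this.setState({ isReady: true })}
          onError={console.warn}
        />
      );
    }

    return <MainNavigator />; // placeholder for the app’s root component
  }
}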

Audio recordings are displayed on a separate screen whose UI changes depending on the state.

First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.

My audio recording UI looks like this:



I can save the audio in the Redux store in the following format:

audioItemsIds: ['id1', 'id2'],
audioItems: {
  'id1': {
    id: string,
    title: string,
    recordDate: date string,
    duration: number,
    audioUrl: string,
  },
},

Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.

Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is a dictionary with the following key-value pairs:

  • playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
  • allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
  • interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
  • shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
  • interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.

Note: You can learn more about the customization of AudioMode in the documentation.

I have used the following values in this app:

interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX — our recording interrupts audio from other apps on iOS.

playsInSilentModeIOS: true

shouldDuckAndroid: true

interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX — our recording interrupts audio from other apps on Android.

allowsRecordingIOS will be set to true before the audio recording starts and to false after it completes.

To implement this, let’s write the handler setAudioMode with Recompose.

withHandlers({
  setAudioMode: () => async ({ allowsRecordingIOS }) => {
    try {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    } catch (error) {
      console.log(error); // eslint-disable-line
    }
  },
}),

To record the audio, you’ll need to create an instance of the Expo.Audio.Recording class.

const recording = new Audio.Recording();

After creating the recording instance, you will be able to receive the status of the Recording with the help of recordingInstance.getStatusAsync().

The status of the recording is a dictionary with the following key-value pairs:

  • canRecord: a boolean.
  • isRecording: a boolean describing whether the recording is currently recording.
  • isDoneRecording: a boolean.
  • durationMillis: current duration of the recorded audio.
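
For example, a one-off status read after recording has started could look like this (recordingInstance is the Audio.Recording created above):

const status = await recordingInstance.getStatusAsync();
console.log(status.isRecording, status.durationMillis);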

You can also set a function to be called at regular intervals with recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate).

To update the UI, you will need to call setOnRecordingStatusUpdate and set your own callback.

Let’s add some props and a recording callback to the container.

withStateHandlers({
    recording: null,
    isRecording: false,
    durationMillis: 0,
    isDoneRecording: false,
    fileUrl: null,
    audioName: '',
  }, {
    setState: () => obj => obj,
    setAudioName: () => audioName => ({ audioName }),
    recordingCallback: () => ({ durationMillis, isRecording, isDoneRecording }) =>
      ({ durationMillis, isRecording, isDoneRecording }),
  }),

The callback setting for setOnRecordingStatusUpdate is:

recording.setOnRecordingStatusUpdate(props.recordingCallback);

onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI updates smooth, set a 200-millisecond interval with the help of setProgressUpdateInterval:

recording.setProgressUpdateInterval(200);

After creating an instance of this class, call prepareToRecordAsync to record the audio.

recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(). This method can be used if the recording instance has never been prepared.

The options parameter of this method covers recording settings such as sample rate, bit rate, number of channels, format, encoder, and file extension. You can find a list of all recording options in this document.
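
If you were to assemble the options yourself instead of using a preset, the object would look roughly like this. This is a sketch only; the exact constant names are assumptions and should be checked against the Expo documentation:

// Hypothetical custom recording options, mirroring the shape of the
// RECORDING_OPTIONS_PRESET_HIGH_QUALITY preset.
const customRecordingOptions = {
  android: {
    extension: '.m4a',
    outputFormat: Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG_4,
    audioEncoder: Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
  },
  ios: {
    extension: '.caf',
    audioQuality: Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_HIGH,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
  },
};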

In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.

After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().

Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:

onStartRecording: props => async () => {
      try {
        if (props.recording) {
          props.recording.setOnRecordingStatusUpdate(null);
          props.setState({ recording: null });
        }

        await props.setAudioMode({ allowsRecordingIOS: true });

        const recording = new Audio.Recording();
        recording.setOnRecordingStatusUpdate(props.recordingCallback);
        recording.setProgressUpdateInterval(200);

        props.setState({ fileUrl: null });

        await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
        await recording.startAsync();

        props.setState({ recording });
      } catch (error) {
        console.log(error); // eslint-disable-line
      }
    },

Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable it on iOS, receive and save the local URL of the recording, and set OnRecordingStatusUpdate and the recording instance to null:

onEndRecording: props => async () => {
      try {
        await props.recording.stopAndUnloadAsync();
        await props.setAudioMode({ allowsRecordingIOS: false });
      } catch (error) {
        console.log(error); // eslint-disable-line
      }

      if (props.recording) {
        const fileUrl = props.recording.getURI();
        props.recording.setOnRecordingStatusUpdate(null);
        props.setState({ recording: null, fileUrl });
      }
    },

After this, type the audio name, click the “continue” button, and the audio note will be saved in the Redux store.

onSubmit: props => () => {
      if (props.audioName && props.fileUrl) {
        const audioItem = {
          id: uuid(),
          recordDate: moment().format(),
          title: props.audioName,
          audioUrl: props.fileUrl,
          duration: props.durationMillis,
        };

        props.addAudio(audioItem);
        props.setState({
          audioName: '',
          isDoneRecording: false,
        });

        props.navigation.navigate(screens.LibraryTab);
      }
    },

Audio Playback With React Native

You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.

Here’s what my audio playback UI looks like:



The Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback.

Let’s write the logic of the audio playback by using Recompose in the screen container src/screens/LibraryScreen/LibraryScreenContainer, as the audio player is available only on this screen.

If you want to display the player at any point of the application, I recommend writing the logic of the player and audio playback in Redux operations using redux-thunk.

Let’s customize the audio mode in the same way we did for the audio recording. First, set allowsRecordingIOS to false.

lifecycle({
    async componentDidMount() {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS: false,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    },
  }),

We have created the recording instance for audio recording. As for audio playback, we need to create the sound instance. We can do it in two different ways:

  1. const playbackObject = new Expo.Audio.Sound();
  2. Expo.Audio.Sound.create(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)

If you use the first method, you will need to call playbackObject.loadAsync(), which loads the media from source into memory and prepares it for playing, after creation of the instance.

The second method is a static convenience method to construct and load a sound. It creates and loads a sound from source, with the optional initialStatus, onPlaybackStatusUpdate, and downloadFirst parameters.

The source parameter is the source of the sound. It supports the following forms:

  • a dictionary of the form { uri: 'http://path/to/file' } with a network URL pointing to an audio file on the web;
  • require('path/to/file') for an audio file asset in the source code directory;
  • an Expo.Asset object for an audio file asset.

The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls describing the state of the playbackObject at that point of time. It is a dictionary with the key-value pairs. You can check all of the keys of the PlaybackStatus in the documentation.

onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state. The interval is 500 milliseconds by default. In my application, I set it to 50 milliseconds interval for a proper UI update.
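
To make the two approaches concrete, here is a short sketch of both (the URI is a placeholder, and onPlaybackStatusUpdate is the callback discussed next):

// Method 1: create the instance, then load the media manually.
const playbackObject = new Expo.Audio.Sound();
await playbackObject.loadAsync({ uri: 'http://path/to/file.mp3' });

// Method 2: the static helper creates and loads the sound in one call,
// here with the 50 ms status interval used in this app.
const { sound } = await Expo.Audio.Sound.create(
  { uri: 'http://path/to/file.mp3' },
  { shouldPlay: false, progressUpdateIntervalMillis: 50 },
  onPlaybackStatusUpdate,
  true, // downloadFirst
);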

Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:

withClassVariableHandlers({
    playbackInstance: null,
    isSeeking: false,
    shouldPlayAtEndOfSeek: false,
    playingAudio: null,
  }, 'setClassVariable'),
  withStateHandlers({
    position: null,
    duration: null,
    shouldPlay: false,
    isLoading: true,
    isPlaying: false,
    isBuffering: false,
    showPlayer: false,
  }, {
    setState: () => obj => obj,
  }),

Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:

withHandlers({
    soundCallback: props => (status) => {
      if (status.didJustFinish) {
        props.playbackInstance().stopAsync();
      } else if (status.isLoaded) {
        const position = props.isSeeking()
          ? props.position
          : status.positionMillis;
        // While the user is seeking, keep the current playing state instead
        // of the one reported by the status update.
        const isPlaying = props.isSeeking() ? props.isPlaying : status.isPlaying;

        props.setState({
          position,
          duration: status.durationMillis,
          shouldPlay: status.shouldPlay,
          isPlaying,
          isBuffering: status.isBuffering,
        });
      }
    },
  }),

After this, you have to implement a handler for the audio playback. If a sound instance is already created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear OnPlaybackStatusUpdate:

loadPlaybackInstance: props => async (shouldPlay) => {
      props.setState({ isLoading: true });

      if (props.playbackInstance() !== null) {
        await props.playbackInstance().unloadAsync();
        props.playbackInstance().setOnPlaybackStatusUpdate(null);
        props.setClassVariable({ playbackInstance: null });
      }

      const { sound } = await Audio.Sound.create(
        { uri: props.playingAudio().audioUrl },
        { shouldPlay, position: 0, duration: 1, progressUpdateIntervalMillis: 50 },
        props.soundCallback,
      );

      props.setClassVariable({ playbackInstance: sound });

      props.setState({ isLoading: false });
    },

Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.

Let’s add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:

onTogglePlaying: props => () => {
      if (props.playbackInstance() !== null) {
        if (props.isPlaying) {
          props.playbackInstance().pauseAsync();
        } else {
          props.playbackInstance().playAsync();
        }
      }
    },

When you click on the playing item, it should stop. If you want to stop audio playback and put it into the 0 playing position, you can use the method playbackInstance.stopAsync():

onStop: props => () => {
      if (props.playbackInstance() !== null) {
        props.playbackInstance().stopAsync();

        props.setShowPlayer(false);
        props.setClassVariable({ playingAudio: null });
      }
    },

The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().

After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):

onCompleteSliding: props => async (value) => {
      if (props.playbackInstance() !== null) {
        if (props.shouldPlayAtEndOfSeek) {
          await props.playbackInstance().playFromPositionAsync(value);
        } else {
          await props.playbackInstance().setPositionAsync(value);
        }
        props.setClassVariable({ isSeeking: false });
      }
    },
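
The handler for the start of the slide isn’t shown in the article; a hypothetical counterpart using the same props might look like this:

// Hypothetical: pause playback when the user grabs the slider and remember
// whether playback should resume once sliding completes.
onStartSliding: props => () => {
  if (props.playbackInstance() !== null && !props.isSeeking()) {
    props.setClassVariable({
      isSeeking: true,
      shouldPlayAtEndOfSeek: props.shouldPlay,
    });
    props.playbackInstance().pauseAsync();
  }
},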

After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).

Video Recording Functionality With React Native

Let’s proceed to video recording.

We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.

To record video, you need permission for access to the camera and microphone. Let’s add the request for camera access as we did with the audio recording (in the file src/index.js):

await Permissions.askAsync(Permissions.CAMERA);

Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.

You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.

Here is what my video recording UI looks like:



Let’s write the video recording logic by using Recompose in the screen container src/screens/RecordVideoScreen/RecordVideoScreenContainer.

You can see the full list of props for the Expo.Camera component in the documentation.

In this application, we will use the following props for Expo.Camera.

  • type: This sets the camera type (front or back).
  • onCameraReady: This callback is invoked when the camera preview is set. You won’t be able to start recording if the camera is not ready.
  • style: This sets the styles for the camera container. In this case, the size is 4:3.
  • ref: This is used for direct access to the camera component.

Let’s add a variable for saving the camera type and a handler for changing it.

cameraType: Camera.Constants.Type.back,

toggleCameraType: state => () => ({
      cameraType: state.cameraType === Camera.Constants.Type.front
        ? Camera.Constants.Type.back
        : Camera.Constants.Type.front,
    }),

Let’s add a variable for saving the camera-ready state and a callback for onCameraReady.

isCameraReady: false,

setCameraReady: () => () => ({ isCameraReady: true }),

Let’s add a variable for saving the camera component reference and a setter for it.

cameraRef: null,

setCameraRef: () => cameraRef => ({ cameraRef }),

Let’s pass these variables and handlers to the camera component.

<Camera
          type={cameraType}
          onCameraReady={setCameraReady}
          style={s.camera}
          ref={setCameraRef}
        />

Now, when calling toggleCameraType after clicking the button, the camera will switch from the front to the back.

Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().

The method recordAsync starts recording a video to be saved to the cache directory.

Arguments:

Options (object) — a map of options:

  • quality (VideoQuality): Specify the quality of the recorded video. Usage: Camera.Constants.VideoQuality['<value>']; possible values for 16:9 resolution are 2160p, 1080p, 720p, and 480p (Android only), and for 4:3 the size is 640×480. If the chosen quality is not available for the device, the highest available one is chosen.
  • maxDuration (number): Maximum video duration in seconds.
  • maxFileSize (number): Maximum video file size in bytes.
  • mute (boolean): If present, video will be recorded with no sound.

recordAsync returns a promise that resolves to an object containing the video file’s URI property. You will need to save the file’s URI in order to play back the video later. The promise resolves when stopRecording is invoked, when one of maxDuration or maxFileSize is reached, or when the camera preview is stopped.

Because the camera component’s aspect ratio is set to 4:3, let’s use the same format for the video quality.

Here is what the handler for starting video recording looks like (see the full code of the container in the repository):

onStartRecording: props => async () => {
      if (props.isCameraReady) {
        props.setState({ isRecording: true, fileUrl: null });
        props.setVideoDuration();
        props.cameraRef.recordAsync({ quality: '4:3' })
          .then((file) => {
            props.setState({ fileUrl: file.uri });
          });
      }
    },

During video recording, we can’t receive the recording status the way we did for audio. That’s why I created a function to track the video duration.
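
setVideoDuration itself isn’t shown in the article; one possible implementation, consistent with the interval that stopRecording clears below, is:

// Hypothetical: tick once per second and store the interval id so that
// stopRecording can call clearInterval(props.interval).
setVideoDuration: props => () => {
  const startTime = Date.now();
  const interval = setInterval(() => {
    props.setState({ videoDuration: Date.now() - startTime });
  }, 1000);
  props.setState({ interval });
},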

To stop the video recording, we have to call the following function:

stopRecording: props => () => {
      if (props.isRecording) {
        props.cameraRef.stopRecording();
        props.setState({ isRecording: false });
        clearInterval(props.interval);
      }
    },

Check out the entire process of video recording.

Video Playback Functionality With React Native

You can play back the video on the “Library” screen. Video notes are located in the “Video” tab.

To start the video playback, click the selected item in the list. Then, switch to the playback screen, where you can watch or delete the video.

The UI for video playback looks like this:



To play back the video, use Expo.Video, a component that displays a video inline with the other React Native UI elements in your app.

The video will be displayed on the separate screen, PlayVideo.

You can check out all of the props for Expo.Video here.

In our application, the Expo.Video component uses native playback controls and looks like this:

<Video
        source={{ uri: videoUrl }}
        style={s.video}
        shouldPlay={isPlaying}
        resizeMode="contain"
        useNativeControls={isPlaying}
        onLoad={onLoad}
        onError={onError}
      />
  • source

    This is the source of the video data to display. The same forms as for Expo.Audio.Sound are supported.

  • resizeMode

    This is a string describing how the video should be scaled for display in the component view’s bounds. It can be “stretch”, “contain” or “cover”.

  • shouldPlay

    This boolean describes whether the media is supposed to play.

  • useNativeControls

    This boolean, if set to true, displays native playback controls (such as play and pause) within the video component.

  • onLoad

    This function is called once the video has been loaded.

  • onError

    This function is called if loading or playback has encountered a fatal error. The function passes a single error message string as a parameter.

When the video is uploaded, the play button should be rendered on top of it.

When you click the play button, the video turns on and the native playback controls are displayed.

Let’s write the logic of the video using Recompose in the screen container src/screens/PlayVideoScreen/PlayVideoScreenContainer:

const defaultState = {
  isError: false,
  isLoading: false,
  isPlaying: false,
};

const enhance = compose(
  paramsToProps('videoUrl'),
  withStateHandlers({
    ...defaultState,
    isLoading: true,
  }, {
    onError: () => () => ({ ...defaultState, isError: true }),
    onLoad: () => () => defaultState,
    onTogglePlaying: ({ isPlaying }) => () => ({ ...defaultState, isPlaying: !isPlaying }),
  }),
);

As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That’s why you can create custom controls and use more advanced functionality with the Playback API.

Check out the video playback process:

See the full code for the application in the repository.

You can also install the app on your phone by using Expo and check out how it works in practice.

Wrapping Up

I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.

As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.


Analyzing Your Company’s Social Media Presence With IBM Watson And Node.js

If you are unfamiliar with Machine Learning (ML) technology, it has existed in science fiction for many years and is finally reaching its maturity in our society. One of the first ML examples I saw as a kid was in Star Trek’s The Next Generation, when Lieutenant Tasha Yar trains with her holographic opponent, which learns how she fights so it can better defeat her in future battles.

In today’s society, China has developed a “lane robot” that is a guard rail controlled by a computer system that can direct the flow of traffic into different lanes, increasing safety and improving traveling time. This is done automatically based on time of day and how much traffic is flowing in each direction.

Another example is Pittsburgh unveiling AI traffic signals that automatically detect traffic patterns and alter the traffic lights on-the-fly. Each light is controlled independently to help reduce both the commuting time and the idling time of cars. According to the article, pilot tests have demonstrated a reduced travel time by 25% and idling time by over 40%. There are, of course, hundreds of other examples of ML technology that make intelligent decisions based on the content it consumes.

To accomplish today’s goal, I am going to demonstrate how to perform a search with Twitter’s API to retrieve content that will be fed into an ML algorithm for analysis. This way, you’ll be provided with characteristics of the users who wrote that content, giving you a better understanding of your audience. The example application will be written using Node.js as the server.

It is beyond the scope of this article to demonstrate how to write an ML algorithm. Instead, to aid in the analysis, I will demonstrate how to use IBM’s Watson to help you understand the general personality of your social media audience.

What Is IBM Watson?

In 2011, Watson began as a computer system that attempted to index the (entire) Internet. It was originally programmed to answer questions posed in ordinary English. Watson competed and won on the TV show Jeopardy!, claiming a $1,000,000 cash prize.

Watson was now a proven success.

With the fame of winning on Jeopardy!, IBM has continued to push Watson’s capabilities. Watson has evolved into an enterprise-level application focused on Artificial Intelligence (AI), which you can train to identify what you care about most, allowing you to make smarter decisions automatically.

The suite of Watson’s services is divided into six high-level categories:

  1. Conversation
    The services in this category allow you to build intelligent chatbots or virtual customer service agents.
  2. Knowledge
    This category is focused on teaching Watson how to interpret data to unlock hidden value and monitor trends.
  3. Vision
    This service provides the ability to tag content inside an image, which is used to train Watson to automatically recognize the same pattern in other images.
  4. Speech
    These services provide the ability to convert speech to text and the inverse, text to speech.
  5. Language
    This category is split between translating one language to another as well as interpreting the text to predict what predefined category the text belongs to.
  6. Empathy
    This category is devoted to understanding the content’s tone, personality, and emotional state. Inside this category is a service called “Personality Insights” that will be used in this article to predict personality characteristics from the social media content we will provide.

This article will be focusing on understanding the personality of the content that we will fetch from Twitter. However, as you can see, Watson provides many other AI features that you can explore to automate many other processes simply through training and content aggregation.

Personality Insights

Personality Insights will analyze content and help you understand the habits and preferences at an individual level and at scale. This is called the ‘personality profile.’ The profile is split into two high-level groups: Personality characteristics and Consumption preferences. These groups are further broken down into more finite components.

Note: To help understand the high-level concepts (before we deep dive into the results), the Personality Insights documentation provides this helpful summary describing how the profile is inferred from the content you provide it.


Big Five Personality Traits. (Image courtesy: IBM.com)

Personality Characteristics

The Personality Insights service infers personality characteristics based on three primary models:

  • The ‘Big Five’ personality characteristics represent the most widely used model for generally describing how a person engages with the world. The model includes five primary dimensions:
    • Agreeableness
    • Conscientiousness
    • Extraversion
    • Emotional range
    • Openness
      Note: Each dimension has six facets that further characterize an individual according to the dimension.
  • Needs describe which aspects of a product will resonate with a person. The model includes twelve characteristic needs:
    • Excitement
    • Harmony
    • Curiosity
    • Ideal
    • Closeness
    • Self-expression
    • Liberty
    • Love
    • Practicality
    • Stability
    • Challenge
    • Structure
  • Values describe motivating factors that influence a person’s decision making. The model includes five values:
    • Self-transcendence / Helping others
    • Conservation / Tradition
    • Hedonism / Taking pleasure in life
    • Self-enhancement / Achieving success
    • Open to change / Excitement

For more information, see Personality models.

Consumption preferences

Based on the personality characteristics inferred from the input text, the service can also return an indication of the author’s consumption preferences. ‘Consumption preferences’ indicate the author’s likelihood to pursue different products, services, and activities. The service groups the individual preferences into eight categories:

  • Shopping
  • Music
  • Movies
  • Reading and learning
  • Health and activity
  • Volunteering
  • Environmental concern
  • Entrepreneurship

Each category contains from one to as many as a dozen individual preferences.

Note: For more information, see Consumption preferences. For a more in-depth overview of a particular point of interest, I suggest you refer to the Personality Insights documentation.

To be effective, Watson requires a minimum of a hundred words to provide an insight into the consumer’s personality. The more words provided, the better Watson can analyze and determine the consumer’s preference.

This means, if you wish to target individuals, you will need to collect more data than one or two tweets from a specific person. However, if a user writes a product review, blog post, email, or anything else related to your company, this could be analyzed on both an individual level and at scale.
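
A trivial guard before calling the service illustrates this requirement (combinedText is a placeholder for whatever content you have aggregated):

// Skip the API call when there isn't enough text for a meaningful profile.
var wordCount = combinedText.split(/\s+/).filter(Boolean).length;

if (wordCount < 100) {
  console.warn('Personality Insights needs at least 100 words; got ' + wordCount);
}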

To begin, let’s start by setting up the Personality Insights service to begin analyzing a real-world example.

Configuring The Personality Insights Service

Watson is an enterprise application but they offer a free, limited service. Once you’ve created an account and are logged in, you will need to add the Personality Insight service. IBM offers a Lite plan that is free. The Lite plan is limited to 1,000 API calls per month and is automatically deleted after 30 days — perfect for our demonstration.


Create the Personality Insights Service.

Once the service has been added, we will need to retrieve the service’s credentials to perform API calls against it. From Watson’s Dashboard, your service should be displayed. After you’ve selected the service, you’ll find a link to view the Service credentials in the left-hand menu. You will need to create a new ‘Credential.’ A unique name is required; the optional configuration parameters can be left at their defaults for this login. For now, we will leave the configuration options empty.

After you have created a credential, select the ‘View’ credentials link. This will display the API’s URL, your username, and password required to securely execute API calls. Save these somewhere safe as we will need them in the next step.

Testing The Personality Insights Service

To perform API calls, I am going to use Node.js. If you already have Node.js installed, you can move on to the next step; otherwise, follow the instructions to setup Node.js from the official download page.

To demonstrate how to use the Personality Insights, I am going to create a new Node.js project on my computer. With a command prompt open, navigate to the directory where your Node.js projects will be stored and create your new project:

mkdir watson-sentiments
cd watson-sentiments
npm init

To assist with making the API calls to Watson, I am going to leverage the NPM Package: Watson Developer Cloud Node.js SDK. This package can be installed via the command prompt:

npm install watson-developer-cloud --save

Before making the first call, the PersonalityInsightsV3 object needs to be instantiated with the credentials from the previous section. Begin by creating a new file called index.js that will contain the Node.js code.

Here is an example of configuring the class so it is ready to make API calls:

var PersonalityInsightsV3 = require('watson-developer-cloud/personality-insights/v3');
var personality_insights = new PersonalityInsightsV3({
  "url": "https://gateway.watsonplatform.net/personality-insights/api",
  "username": "**************************",
  "password": "*************",
  "version_date": "2017-12-01"
});

The personality_insights variable is what we will use to interact with the API for the Personality Insights service. Let’s review how to execute a call and return a personality profile:

var fs = require('fs');

personality_insights.profile({
  "contentItems": [
    {
      "content": "Some content that contains more than 100 words...",
      "contenttype": "text/plain",
      "created": 1447639154000,
      "id": "666073008692314113",
      "language": "en"
    }
  ],
  "consumption_preferences": true
}, (err, response) => {
  if (err) throw err;

  fs.writeFile("results.txt", JSON.stringify(response, null, 2), function(err) {
    if (err) throw err;

    console.log("Results were saved!");
  });
});

The profile function accepts an array of contentItems. Each content item contains the actual text along with a few additional properties that help Watson interpret it.

When this is executed, the results are written to a text file (the results are too large to write in the console). The result is an object that contains the following high-level properties:

  • word_count
    The count of words interpreted.
  • processed_language
    The language of the content provided, e.g. “en”.

  • Personality
    This is an array of the ‘Big Five’ personality characteristics (Openness, Conscientiousness, Extraversion, Agreeableness, and Emotional range). Each characteristic contains an overall percentile for that characteristic (e.g. 0.8100175318417588). To ascertain more detail, there is an array called children that provides more in-depth insight. For example, a child category under ‘Openness’ is ‘Adventurousness’ that contains its own percentile.
  • Needs
    This is an array of the twelve characteristics that define the aspects a person will resonate with a product (Excitement, Harmony, Curiosity, Ideal, Closeness, Self-expression, Liberty, Love, Practicality, Stability, Challenge, and Structure). Each characteristic contains a percentile of how the content was interpreted.
  • Values
    This is an array of the five characteristics that describe motivating factors that influence a person’s decision making (Self-transcendence / Helping others, Conservation / Tradition, Hedonism / Taking pleasure in life, Self-enhancement / Achieving success, and Open to change / Excitement). Each characteristic contains a percentile of how the content was interpreted.
  • Behavior
    This is an array that contains thirty-one elements. Each element provides a percentile for when the content was created. Seven of the elements cover the days of the week (Sunday through Saturday), and the remaining twenty-four cover the hours of the day. This helps you understand when customers interact with your product.
  • consumption_preferences
    This is an array that contains eight different categories, each with as many as a dozen child categories, providing a percentile of the likelihood to pursue different products, services, and activities (Shopping, Music, Movies, Reading and learning, Health and activity, Volunteering, Environmental concern, and Entrepreneurship).
  • Warnings
    This is an array that provides messages if a problem was encountered interpreting the content provided.
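
As a small illustration, here is how you might read a few of these properties back out of the saved results.txt (property names follow the structure described above):

var fs = require('fs');
var results = JSON.parse(fs.readFileSync('results.txt', 'utf8'));

console.log('Words analyzed:', results.word_count);

// Print the overall percentile for each 'Big Five' dimension.
results.personality.forEach(function(trait) {
  console.log(trait.name + ': ' + Math.round(trait.percentile * 100) + '%');
});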

Here is a CodePen of the formatted results:

See the Pen Example Watson Results by Jamie Munro (@endyourif) on CodePen.

Configuring Twitter

To search Twitter for relevant tweets, I am going to use the Twitter NPM package. From a console window where the application is hosted, run the following command to install:

npm install twitter --save

Before we can implement the Twitter package, you need to create a Twitter application.


Retrieving Twitter’s Access Tokens.

Once you’ve created your application, you need to retrieve the authorization keys required to perform API calls. With your application created, navigate to the ‘Keys and Access Tokens’ page. Since we are not performing API calls against users of Twitter, OAuth integration is not required. Instead, we only need the four following keys:

  1. Consumer Key
  2. Consumer Secret
  3. Access Token
  4. Access Token Secret

The last two keys need to be generated near the bottom of the ‘Keys and Access Tokens’ page. With the keys in hand, here is an example of searching for tweets about #SmashingMagazine:

var Twitter = require('twitter');

var client = new Twitter({
  consumer_key: '*********************',
  consumer_secret: '******************',
  access_token_key: '******************',
  access_token_secret: '****************'
});

client.get('search/tweets', { q: '#SmashingMagazine' }, function(error, tweets, response) {
  if (error) throw error;

  console.log(tweets);
});

The result of this code will log a list of tweets about Smashing Magazine. For the purposes of this demonstration, the following fields are of interest to us:

  1. id
  2. created_at
  3. text
  4. metadata.iso_language_code

These are the fields we will feed Watson.

Integrating Personality Insights With Twitter

With Twitter and Watson both set up, it’s time to integrate the two and see the results. To make it interesting, let’s search for #DonaldTrump to see what the world thinks about the President of the United States. Here is the code example that searches Twitter, feeds the results into Watson, and writes the results to a text file:

var fs = require('fs');
var Twitter = require('twitter');

var client = new Twitter({
  consumer_key: '*********************',
  consumer_secret: '******************',
  access_token_key: '******************',
  access_token_secret: '****************'
});

var PersonalityInsightsV3 = require('watson-developer-cloud/personality-insights/v3');
var personality_insights = new PersonalityInsightsV3({
  "url": "https://gateway.watsonplatform.net/personality-insights/api",
  "username": "**************************",
  "password": "*************",
  "version_date": "2017-12-01"
});

client.get('search/tweets', { q: '#DonaldTrump' }, function(error, tweets, response) {
  if (error) throw error;

  var contentItems = [];

  // Loop through the tweets
  for (var i = 0; i < tweets.statuses.length; i++) {
    var tweet = tweets.statuses[i];

    contentItems.push({
      "content": tweet.text,
      "contenttype": "text/plain",
      "created": new Date(tweet.created_at).getTime(),
      "id": tweet.id,
      "language": tweet.metadata.iso_language_code
    });
  }

  // Call Watson with the tweets
  personality_insights.profile({
    "contentItems": contentItems,
    "consumption_preferences": true
  }, (err, response) => {
    if (err) throw err;

    // Write the results to a file
    fs.writeFile("results.txt", JSON.stringify(response, null, 2), function(err) {
      if (err) throw err;

      console.log("Results were saved!");
    });
  });
});

Here is another CodePen of the formatted results that I received:

See the Pen Donald Trump Watson Results by Jamie Munro (@endyourif) on CodePen.

What Do The Results Say?

Once we’ve analyzed the ‘Openness’ trait of the ‘Big Five,’ we can infer the following:

  • Emotion is quite low at 13%
  • Imagination is average at 54%
  • Intellect is very high at 96%
  • Authority challenging is also quite high at 87%

The ‘Conscientiousness’ trait at a high-level is average at 46% compared with the ‘Openness’ high-level average of 88%. Whereas ‘Agreeableness’ is very low at only 25%. I guess people on Twitter don’t like to agree with Donald Trump.

Moving on to the ‘Needs’: the sub-categories of ‘Curiosity’ and ‘Structure’ are around the 60th percentile, while the other categories (Excitement, Harmony, etc.) fall below the 10th percentile.

And finally, under ‘Values,’ the sub-category that stands out to me as interesting is ‘Openness to change’ at an abysmal 6%.

Based on when you perform your search, your results may vary as the results are limited to the past seven days from executing the example.

From these results, I would determine that the average person who tweets about Donald Trump is quite intellectual, challenges authority, and is not open to change.

With these results, you could automatically alter how you target your content towards your audience to match the results received. You will need to determine which categories are of interest and which percentiles you wish to target. With this ammunition, you can begin automating.
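
A minimal sketch of such automation, picking one category and an arbitrary threshold, could look like this (run inside the profile callback from the example above):

var openness = response.personality.find(function(trait) {
  return trait.name === 'Openness';
});

if (openness && openness.percentile > 0.8) {
  // This audience scores high on Openness: favor novel, exploratory content.
}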

What Else Can I Do With Watson?

As I mentioned at the beginning of this article, Watson offers many other different services. With these services, you could automate many different parts of common business processes. For example:

  • Building a chat bot that can intelligently answer questions based on a knowledge base of information;
  • Build an application where you dictate to Watson what you want written, using the speech-to-text functionality;
  • Automatically translate your content into different languages to create a multi-lingual site or knowledge base;
  • Teach Watson how to look for specific patterns in images. This could be used to determine if a logo is embedded into a photo.

This, of course, is a very small subset that my limited imagination can postulate. I’m sure you can think of many other ways to leverage Watson’s immense capabilities.

If you are looking for more examples, IBM has an entire GitHub repository dedicated to its Node.js SDK. The example folder contains over ten sample applications covering speech to text, text to speech, tone analysis, and visual recognition, to name just a few.

Conclusion

Before Watson can run away with technological growth, resulting in the singularity where Artificial Intelligence destroys mankind, this article demonstrated how you can turn social media content into a powerful understanding of how the people creating the content think. Using the results from Watson, your application can use the categories of interest where the percentile exceeds or falls below a predetermined amount to change how you target your audience.

If you have other interesting uses of Watson or how you are using the Personality Insights, be sure to leave a comment below.


Lazy Loading JavaScript Modules With ConditionerJS

Linking JavaScript functionality to the DOM can be a repetitive and tedious task. You add a class to an element, find all the elements on the page, and attach the matching JavaScript functionality to each element. Conditioner is here to not only take this work off your hands but to supercharge it as well!

In this article, we’ll look at the JavaScript initialization logic that is often used to link UI components to a webpage. Step-by-step we’ll improve this logic, and finally, we’ll make a 1 Kilobyte jump to replacing it with Conditioner. Then we’ll explore some practical examples and code snippets and see how Conditioner can help make our websites more flexible and user-oriented.

Conditioner And Progressive Enhancement Sitting In A Tree

Before we proceed, I need to get one thing across:

Conditioner is not a framework for building web apps.

Instead, it’s aimed at websites. The distinction between websites and web apps is useful for the continuation of this story. Let me explain how I view the overall difference between the two.

Websites are mostly created from a content viewpoint; they are there to present content to the user. The HTML is written to semantically describe the content. CSS is added to nicely present the content across multiple viewports. The third and final act is to carefully layer JavaScript on top to add that extra zing to the user experience. Think of a date picker, navigation, scroll animations, or carousels (pardon my French).

Examples of content-oriented websites include Wikipedia, Smashing Magazine, your local municipality's website, newspapers, and webshops. Web apps are often found in the utility area; think of web-based email clients and online maps. While web apps also present content, their focus is often more on interacting with content than on presenting it. There's a huge grey area between the two, but this contrast will help us decide when Conditioner might be effective and when we should steer clear.

As stated earlier, Conditioner is all about websites, and it’s specifically built to deal with that third act:

Enhancing the presentation layer with JavaScript functionality to offer an improved user experience.

The Troublesome Third Act

The third act is about enhancing the user experience with that zingy JavaScript layer.

Judging from experience and what I’ve seen online, JavaScript functionality is often added to websites like this:

  1. A class is added to an HTML element.
  2. The querySelectorAll method is used to get all elements assigned the class.
  3. A for-loop traverses the NodeList returned in step 2.
  4. A JavaScript function is called for each item in the list.

Let’s quickly put this workflow in code by adding autocomplete functionality to an input field. We’ll create a file called autocomplete.js and add it to the page using a <script> tag.

function createAutocomplete(element) {
  // our autocomplete logic
  // ...
}

<input type="text" class="autocomplete"/>

<script src="autocomplete.js"></script>

<script>
var inputs = document.querySelectorAll('.autocomplete');

for (var i = 0; i < inputs.length; i++) {
  createAutocomplete(inputs[i]);
}
</script>

Go to demo →

That’s our starting point.

Suppose we’re now told to add another functionality to the page, say a date picker, it’s initialization will most likely follow the same pattern. Now we’ve got two for-loops. Add another functionality, and you’ve got three, and so on and so on. Not the best.

While this works and keeps you off the street, it creates a host of problems. We'll have to add a loop to our initialization script for each functionality we add. For each loop we add, the initialization script gets tied ever tighter to the document structure of our website. And since the initialization script is often loaded on every page, all the querySelectorAll calls for all the different functionalities will run on each and every page, whether the functionality is present on that page or not.

For me, this setup never felt quite right. It always started out “okay,” but then it would slowly grow to a long list of repetitive for-loops. Depending on the project it might contain some conditional logic here and there to determine if something loads on a certain viewport or not.

if (window.innerWidth <= 480) {
  // small viewport for-loops here
}

Eventually, my initialization script would always grow out of control and turn into a giant pile of spaghetti code that I would not wish on anyone.

Something needed to be done.

Soul Searching

I am a huge proponent of carefully separating the three web dev layers HTML, CSS, and JavaScript. HTML shouldn’t have a rigid relationship with JavaScript, so no use of inline onclick attributes. The same goes for CSS, so no inline style attributes. Adding classes to HTML elements and then later searching for them in my beloved for-loops followed that philosophy nicely.

That stack of spaghetti loops, though; I wanted to get rid of it so badly.

I remember stumbling upon an article about using data attributes instead of classes, and how those could be used to link up JavaScript functionality (I'm not sure it was this article, but it seems to be from the right timeframe). I didn't like it and misunderstood it. My initial thought was that this was just a cover-up for onclick; it mixed HTML and JavaScript, and there was no way I was going to be lured to the dark side. I wanted nothing to do with it. Close tab.

Some weeks later, I returned to this and found that linking JavaScript functionality using data attributes was still in line with having separate layers for HTML and JavaScript. As it turned out, the author of the article had handed me a solution to my ever-growing initialization problem.

We’ll quickly update our script to use data attributes instead of classes.

<input type="text" data-module="autocomplete">

<script src="autocomplete.js"></script>

<script>
var inputs = document.querySelectorAll('[data-module=autocomplete]');

for (var i = 0; i < inputs.length; i++) {
  createAutocomplete(inputs[i]);
}
</script>

Go to demo →

Done!

But hang on, this is nearly the same setup; we’ve only replaced .autocomplete with [data-module=autocomplete]. How’s that any better? It’s not, you’re right. If we add an additional functionality to the page, we still have to duplicate our for-loop — blast! Don’t be sad though as this is the stepping stone to our killer for-loop.

Watch what happens when we make a couple of adjustments.

<input type="text" data-module="createAutocomplete">

<script src="autocomplete.js"></script>

<script>
var elements = document.querySelectorAll('[data-module]');

for (var i = 0; i < elements.length; i++) {
  var name = elements[i].getAttribute('data-module');
  var factory = window[name];
  factory(elements[i]);
}
</script>

Go to demo →

Now we can load any functionality with a single for-loop.

  1. Find all elements on the page with a data-module attribute;
  2. Loop over the node list;
  3. Get the name of the module from the data-module attribute;
  4. Store a reference to the JavaScript function in factory;
  5. Call the factory JavaScript function and pass the element.

Since we’ve now made the name of the module dynamic, we no longer have to add any additional initialization loops to our script. This is all we need to link any JavaScript functionality to an HTML element.

This basic setup has some other advantages as well:

  • The init script no longer needs to know what it loads; it just needs to be very good at this one little trick.
  • There’s now a convention for linking functionality to the DOM; this makes it very easy to tell which parts of the HTML will be enhanced with JavaScript.
  • The init script does not search for modules that are not there, i.e. no wasted DOM searches.
  • The init script is done. No more adjustments are needed. When we add functionality to the page, it will automatically be found and will simply work.

Wonderful!

So What About This Thing Called Conditioner?

We finally have our single loop, our one loop to rule all other loops, our king of loops, our hyper-loop. Ehm. Okay. We'll just have to conclude that ours is a loop of high quality, so flexible that it can be re-used in each project (there's not really anything project-specific about it). That does not immediately make it library-worthy; it's still quite a basic loop. However, we'll find that our loop requires some additional trickery to really cover all our use cases.

Let’s explore.

With the one loop, we are now loading our functionality automatically.

  1. We assign a data-module attribute to an element.
  2. We add a <script> tag to the page referencing our functionality.
  3. The loop matches the right functionality to each element.
  4. Boom!

Let’s take a look at what we need to add to our loop to make it a bit more flexible and re-usable. Because as it is now, while amazing, we’re going to run into trouble.

  • It would be handy if we moved the global functions to isolated modules. This prevents pollution of the global scope and makes our modules more portable to other projects. And we'll no longer have to add our <script> tags manually: fewer things to add to the page, fewer things to maintain.
  • When using our portable modules across multiple projects (and/or pages) we’ll probably encounter a situation where we need to pass configuration options to a module. Think API keys, labels, animation speeds. That’s a bit difficult at the moment as we can’t access the for-loop.
  • With the ever-growing diversity of devices out there, we will eventually encounter a situation where we only want to load a module in a certain context. For instance, a menu that needs to be collapsed on small viewports. We don't want to add if-statements to our loop. It's beautiful as it is; we will not add if-statements to our for-loop. Never.

That’s where Conditioner can help out. It encompasses all above functionality. On top of that, it exposes a plugin API so we can configure and expand Conditioner to exactly fit our project setup.

Let’s make that 1 Kilobyte jump and replace our initialization loop with Conditioner.

Switching To Conditioner

We can get the Conditioner library from the GitHub repository, npm or from unpkg. For the rest of the article, we’ll assume the Conditioner script file has been added to the page.

The fastest way is to add the unpkg version.

<script src="https://unpkg.com/conditioner-core/conditioner-core.js"></script>

With Conditioner added to the page, let's take a moment of silence and bid farewell to our killer for-loop.

Conditioner's default behavior is exactly the same as that of our now departed for-loop. It'll search for elements with the data-module attribute and link them to globally scoped JavaScript functions.

We can start this process by calling the conditioner hydrate method.

<input type="text" data-module="createAutocomplete"/>

<script src="autocomplete.js"></script>

<script>
conditioner.hydrate(document.documentElement);
</script>

Go to demo →

Note that we pass the documentElement to the hydrate method. This tells Conditioner to search the subtree of the <html> element for elements with the data-module attribute.

It basically does this:

document.documentElement.querySelectorAll('[data-module]');

Okay, great! We’re set to take it to the next level. Let’s try to replace our globally scoped JavaScript functions with modules. Modules are reusable pieces of JavaScript that expose certain functionality for use in your scripts.

Moving From Global Functions To Modules

In this article, our modules will follow the new ES Module standard, but the examples will also work with modules based on the Universal Module Definition or UMD.

Step one is turning the createAutocomplete function into a module. Let’s create a file called autocomplete.js. We’ll add a single function to this file and make it the default export.

export default function(element) {
  // autocomplete logic
  // ...
}

It’s the same as our original function, only prepended with export default.

For the other code snippets, we’ll switch from our classic function to arrow functions.

export default element => {
  // autocomplete logic
  // ...
}

We can now import our autocomplete.js module and use the exported function like this:

import('./autocomplete.js').then(module => {
  // the autocomplete function is located in module.default
});

Note that this only works in browsers that support Dynamic import(). At the time of this writing that would be Chrome 63 and Safari 11.
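If you want to verify support up front, one common detection trick (my assumption, not something Conditioner provides) is to try compiling an import() expression and catch the syntax error:

// feature-test dynamic import(); constructing the function throws a
// SyntaxError in browsers that don't support the syntax
var supportsDynamicImport = false;
try {
  new Function('import("")');
  supportsDynamicImport = true;
} catch (e) {
  // fall back to globally scoped functions here
}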

Okay, so we now know how to create and import modules, our next step is to tell Conditioner to do the same.

We update the data-module attribute to ./autocomplete.js so it matches our module file name and relative path.

Remember: The import() method requires a path relative to the current module. If we don’t prepend the autocomplete.js filename with ./ the browser won’t be able to find the module.

Conditioner is still busy searching for functions on the global scope. Let’s tell it to dynamically load ES Modules instead. We can do this by overriding the moduleImport action.

We also need to tell it where to find the constructor function (module.default) on the imported module. We can point Conditioner in the right direction by overriding the moduleGetConstructor action.

<input type="text" data-module="./autocomplete.js"/>

<script>
conditioner.addPlugin({
  // fetch module with dynamic import
  moduleImport: (name) => import(name),

  // get the module constructor
  moduleGetConstructor: (module) => module.default
});

conditioner.hydrate(document.documentElement);
</script>

Go to demo →

Done!

Conditioner will now automatically lazy load ./autocomplete.js, and once received, it will call the module.default function and pass the element as a parameter.

Defining our autocomplete as ./autocomplete.js is very verbose. It’s difficult to read, and when adding multiple modules on the page, it quickly becomes tedious to write and error prone.

This can be remedied by overriding the moduleSetName action. Conditioner views the data-module value as an alias and will only use the value returned by moduleSetName as the actual module name. Let’s automatically add the js extension and relative path prefix to make our lives a bit easier.

<input type="text" data-module="autocomplete"/>

conditioner.addPlugin({
  // converts module aliases to paths
  moduleSetName: (name) => `./${name}.js`
});

Go to demo →

Now we can set data-module to autocomplete instead of ./autocomplete.js, much better.

That’s it! We’re done! We’ve setup Conditioner to load ES Modules. Adding modules to a page is now as easy as creating a module file and adding a data-module attribute.

The plugin architecture makes Conditioner super flexible. Because of this flexibility, it can be modified for use with a wide range of module loaders and bundlers. There are bootstrap projects available for Webpack, Browserify, and RequireJS.

Please note that Conditioner does not handle module bundling. You'll have to configure your bundler to find the right balance between serving one bundled file containing all modules and serving a separate file for each module. I usually cherry-pick tiny modules and core UI modules (like navigation) and serve them in a bundled file, while conditionally loading all scripts further down the page.
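As a sketch of what that setup could look like with Webpack (assuming your modules live in a hypothetical ./ui folder), keeping a static path prefix inside import() lets Webpack locate every candidate module at build time and emit a separate chunk for each:

conditioner.addPlugin({
  // the static './ui/' prefix lets Webpack find all modules in that
  // folder and split each one into its own lazily loaded chunk
  moduleImport: (name) => import('./ui/' + name + '.js')
});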

Alright, module loading: check! It's now time to figure out how to pass configuration options to our modules. We can't access our loop (and we don't really want to), so we need to figure out how to pass parameters to the constructor functions of our modules.

Passing Configuration Options To Our Modules

I might have bent the truth a little bit. Conditioner has no out-of-the-box solution for passing options to modules. There, I said it. To keep Conditioner as tiny as possible, I decided to strip this feature and make it available through the plugin API. We'll explore some other options for passing variables to modules and then use the plugin API to set up an automatic solution.

The easiest and at the same time most banal way to create options that our modules can access is to define options on the global window scope.

window.autocompleteSource = './api/query';

export default (element) => {
  console.log(window.autocompleteSource);
  // will log './api/query'

  // autocomplete logic
  // ...
}

Don’t do this.

It’s better to simply add additional data attributes.

<input type="text" 
       data-module="autocomplete" 
       data-source="./api/query"/>

These attributes can then be accessed inside our module by accessing the element dataset which returns a DOMStringMap of all data attributes.

export default (element) => {
  console.log(element.dataset.source);
  // will log './api/query'

  // autocomplete logic
  // ...
}

This could result in a bit of repetition as we’ll be accessing element.dataset in each module. If repetition is not your thing, read on, we’ll fix it right away.

We can automate this by extracting the dataset and injecting it as an options parameter when mounting the module. Let’s override the moduleSetConstructorArguments action.

conditioner.addPlugin({

  // the name of the module and the element it's being mounted to
  moduleSetConstructorArguments: (name, element) => ([
    element,
    element.dataset
  ])

});

The moduleSetConstructorArguments action returns an array of parameters which will automatically be passed to the module constructor.

export default (element, options) => {
  console.log(options.source);
  // will log './api/query'

  // autocomplete logic
  // ...
}

We’ve only eliminated the dataset call, i.e. seven characters. Not the biggest improvement, but we’ve opened the door to take this a bit further.

Suppose we have multiple autocomplete modules on the page, and each and every single one of them requires the same API key. It would be handy if that API key was supplied automagically instead of having to add it as a data attribute on each element.

We can improve our developer lives by adding a page level configuration object.

const pageOptions = {
  // the module alias
  autocomplete: {
    key: 'abc123' // api key
  }
}

conditioner.addPlugin({

  // the name of the module and the element it's being mounted to
  moduleSetConstructorArguments: (name, element) => ([
    element,
    // merge the default page options with the options set on the element itself
    Object.assign({},
      pageOptions[element.dataset.module],
      element.dataset
    )
  ])

});

Go to demo →

As our pageOptions variable has been defined with const it’ll be block-scoped, which means it won’t pollute the global scope. Nice.

Using Object.assign we merge an empty object with both the pageOptions for this module and the dataset DOMStringMap found on the element. This will result in an options object containing both the source property and the key property. Should one of the autocomplete elements on the page have a data-key attribute, it will override the pageOptions default key for that element.

const ourOptions = Object.assign(
  {},
  { key: 'abc123' },
  { source: './api/query' }
);

console.log(ourOptions);
// output: { key: 'abc123', source: './api/query' }

That’s some top-notch developer convenience right there.

By having added this tiny plugin, we can automatically pass options to our modules. This makes our modules more flexible and therefore re-usable over multiple projects. We can still choose to opt-out and use dataset or globally scope our configuration variables (no, don’t), whatever fits best.

Our next challenge is the conditional loading of modules. It’s actually the reason why Conditioner is named Conditioner. Welcome to the inner circle!

Conditionally Loading Modules Based On User Context

Back in 2005, desktop computers were all the rage, everyone had one, and everyone browsed the web with it. Screen resolutions ranged from big to bigger. And while users could scale down their browser windows, we looked the other way and basked in the glory of our beautiful fixed-width sites.

I’ve rendered an artist impression of the 2005 viewport:


A rectangular area illustrating a single viewport size of 1024 pixels by 768 pixels

The 2005 viewport in its full glory, 1024 pixels wide, and 768 pixels high. Wonderful.

Today, a little over ten years later, there are more people browsing the web on mobile than on desktop, resulting in lots of different viewports.

I’ve applied this knowledge to our artist impression below.


Multiple overlapping rectangles illustrating a high amount of different viewport sizes

More viewports than you can shake a stick at.

Holy smokes! That’s a lot of viewports.

Today, someone might visit your site on a small mobile device connected to a crazy fast WiFi hotspot, while another user might access your site using a desktop computer on a slow tethered connection. Yes, I switched up the connection speeds — reality is unpredictable.

And to think we were worried about users resizing their browser window. Hah!

Note that those million viewports are not set in stone. A user might load a website in portrait orientation and then rotate the device (or resize the browser window), all without reloading the page. Our websites should be able to handle this and load or unload functionality accordingly.

Someone on a tiny device should not receive the same JavaScript package as someone on a desktop device. That seems hardly fair; it’ll most likely result in a sub-optimal user experience on both the tiny mobile device and the good ol’ desktop device.

With Conditioner in place, let's configure it as a gatekeeper and have it load modules based on the current user context. The user context contains information about the environment in which the user is interacting with your functionality. Some examples of environment variables influencing context are viewport size, time of day, location, and battery level. The user can also supply you with context hints, for instance, a preference for reduced motion. How a user behaves on your platform will also tell you something about the context she might be in: Is this a recurring visit? How long is the current user session?

The better we’re able to measure these environment variables the better we can enhance our interface to be appropriate for the context the user is in.

We’ll need an attribute to describe our modules context requirements so Conditioner can determine the right moment for the module to load and to unload. We’ll call this attribute data-context. It’s pretty straightforward.

Let’s leave our lovely autocomplete module behind and shift focus to a new module. Our new section-toggle module will be used to hide the main navigation behind a toggle button on small viewports.

Since it should be possible for our section-toggle to be unloaded, the default function returns another function. Conditioner will call this function when it unloads the module.

export default (element) => {
  // sectionToggle logic
  // ...

  return () => {
    // sectionToggle unload logic
    // ...
  }
}

We don’t need the toggle behavior on big viewports as those have plenty of space for our menu (it’s a tiny menu). We only want to collapse our menu on viewports more narrow than 30em (this translates to 480px).

Let’s setup the HTML.

<nav>
  <h1 data-module="sectionToggle" 
      data-context="@media (max-width:30em)">
      Navigation
  </h1>
  <ul>
    <li><a href="/home">home</a></li>
    <li><a href="/about">about</a></li>
    <li><a href="/contact">contact</a></li>
  </ul>
</nav>

Go to demo →

The data-context attribute will trigger Conditioner to automatically load a context monitor observing the media query (max-width:30em). When the user context matches this media query, it will load the module; when it does not, or no longer does, it will unload the module.

Monitoring happens based on events. This means that after the page has loaded, should the user resize the viewport or rotate the device, the user context is re-evaluated and the module is loaded or unloaded based on the new observations.

You can view context monitoring as the dynamic sibling of feature detection. Where feature detection is about an on/off situation (the browser either supports WebGL, or it doesn't), context monitoring is a continuous process: the initial state is observed at page load, but monitoring continues afterwards. While the user is navigating the page, the context is monitored, and observations can influence the page state in real-time.

This nonstop monitoring is important as it allows us to adapt to context changes immediately (without page reload) and optimizes our JavaScript layer to fit each new user context like a glove.

The media query monitor is the only monitor that is available by default. Adding your own custom monitors is possible using the plugin API. Let’s add a visible monitor which we’ll use to determine if an element is visible to the user (scrolled into view). To do this, we’ll use the brand new IntersectionObserver API.

conditioner.addPlugin({
  // the monitor hook expects a configuration object
  monitor: {
    // the name of our monitor, without the '@'
    name: 'visible',

    // the create method will return our monitor API
    create: (context, element) => ({

      // current match state
      matches: false,

      // called by conditioner to start listening for changes
      addListener(change) {

        new IntersectionObserver(entries => {

          // update the matches state
          this.matches = entries.pop().isIntersecting == context;

          // inform Conditioner of the state change
          change();

        }).observe(element);

      }
    })
  }
});

We now have a visible monitor at our disposal.

Let’s use this monitor to only load images when they are scrolled in to view.

Our base image HTML will be a link to the image. When JavaScript fails to load, the links will still work, and the link text will describe the image. This is progressive enhancement at work.

<a href="cat-nom.jpg" 
   data-module="lazyImage" 
   data-context="@visible">
   A red cat eating a yellow bird
</a>

Go to demo →

The lazyImage module will extract the link text, create an image element, and set the link text to the alt text of the image.

export default (element) => {

  // store original link text
  const text = element.textContent;

  // replace element text with image
  const image = new Image();
  image.src = element.href;
  image.setAttribute('alt', text);
  element.replaceChild(image, element.firstChild);

  return () => {
    // restore original element state
    element.innerHTML = text;
  }
}

When the anchor is scrolled into view, the link text is replaced with an img tag.

Because we’ve returned an unload function the image will be removed when the element scrolls out of view. This is most likely not what we desire.

We can remedy this behavior by adding the was operator. It will tell Conditioner to retain the first matched state.

<a href="cat-nom.jpg" 
   data-module="lazyImage" 
   data-context="was @visible">
   A red cat eating a yellow bird
</a>

There are three other operators at our disposal.

The not operator lets us invert a monitor result. Instead of writing @visible false we can write not @visible which makes for a more natural and relaxed reading experience.

Last but not least, we can use the or and and operators to string monitors together and form complex context requirements. Using and combined with or we can do lazy image loading on small viewports and load all images at once on big viewports.

<a href="cat-nom.jpg" 
   data-module="lazyImage" 
   data-context="was @visible and @media (max-width:30em) or @media (min-width:30em)">
   A red cat eating a yellow bird
</a>

We’ve looked at the @media monitor and have added our custom @visible monitor. There are lots of other contexts to measure and custom monitors to build:

  • Tap into the Geolocation API and monitor the location of the user @location (near: 51.4, 5.4) to maybe load different scripts when a user is near a certain location.
  • Imagine a @time monitor, which would make it possible to enhance a page dynamically based on the time of day @time (after 20:00); a minimal sketch of it follows this list.
  • Use the Device Light API to determine the light level @lightlevel (max-lumen: 50) at the location of the user. This, combined with the time of day, could be used to perfectly tune page colors.
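To make one of these ideas concrete, below is a minimal sketch of the hypothetical @time monitor, following the same plugin shape as our @visible monitor. It assumes Conditioner hands the query value (for example (after 20:00)) to create as a string, and it only handles whole hours.

conditioner.addPlugin({
  monitor: {
    name: 'time',

    // context is assumed to be the raw query value, e.g. '(after 20:00)'
    create: (context, element) => ({

      matches: false,

      addListener(change) {
        // extract the hour from the query; only whole hours are handled
        const hour = parseInt((/\d+/.exec(context) || ['0'])[0], 10);

        const check = () => {
          this.matches = new Date().getHours() >= hour;
          change();
        };

        // evaluate now, then re-evaluate every minute
        check();
        setInterval(check, 60 * 1000);
      }
    })
  }
});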

By moving context monitoring outside of our modules, our modules have become even more portable. If we need to add collapsible sections to one of our pages, it’s now easy to re-use our section toggle module, because it’s not aware of the context in which it’s used. It just wants to be in charge of toggling something.

And this is what Conditioner makes possible, it extracts all distractions from the module and allows you to write a module focused on a single task.

Using Conditioner In JavaScript

Conditioner exposes a total of three methods. We’ve already encountered the hydrate and addPlugin methods. Let’s now have a look at the monitor method.

The monitor method lets us manually monitor a context and receive context updates.

const monitor = conditioner.monitor('@media (min-width:30em)');
monitor.onchange = (matches) => {
  // called when a change to the context was observed
};
monitor.start();

This method makes it possible to do context monitoring from JavaScript without requiring the DOM starting point. This makes it easier to combine Conditioner with frameworks like React, Angular or Vue to help with context monitoring.

As a quick example, I've built a React <ContextRouter> component that uses Conditioner to monitor user context queries and switch between views. It's heavily inspired by React Router, so it might look familiar.

<ContextRouter>
    <Context query="@media (min-width:30em)"
             component={FancyInfoGraphic}/>
    <Context>
        {/* fallback to use on smaller viewports */}
        <table/>
    </Context>
</ContextRouter>

I hope someone out there is itching to convert this to Angular. As a cat and React person I just can’t get myself to do it.

Conclusion

Replacing our initialization script with the killer for-loop created a single entity in charge of loading modules. From that change, a set of requirements automatically followed. We used Conditioner to fulfill these requirements and then wrote custom plugins to extend Conditioner where it didn't fit our needs.

Not having access to our single for-loop steered us towards writing more re-usable and flexible modules. By switching to dynamic imports, we could lazy load these modules, and later load them conditionally by combining lazy loading with context monitoring.

With conditional loading, we can quickly determine when to send which module over the connection, and by building advanced context monitors and queries, we can target more specific contexts for enhancement.

By combining all these tiny changes, we can speed up page load time and more closely match our functionality to each different context. This will result in improved user experience and as a bonus improve our developer experience as well.


Continue reading: 

Lazy Loading JavaScript Modules With ConditionerJS

How To Build A Skin For Your Web App With React And WordPress

So you’ve trained yourself as a web engineer, and now want to build a blazing fast online shop for your customers. The product list should appear in an instant, and searching should waste no more than a split second either. Is that the stuff of daydreams?
Not anymore. Well, at least it’s nothing that can’t be achieved with the combination of WordPress’ REST API and React, a modern JavaScript library.

View the original here: 

How To Build A Skin For Your Web App With React And WordPress

Adding Code-Splitting Capabilities To A WordPress Website Through PoP

Speed is among the top priorities for any website nowadays. One way to make a website load faster is by code-splitting: splitting an application into chunks that can be loaded on demand — loading only the required JavaScript that is needed and nothing else. Websites based on JavaScript frameworks can immediately implement code-splitting through Webpack, the popular JavaScript bundler. For WordPress websites, though, it is not so easy. First, Webpack was not intentionally built to work with WordPress, so setting it up will require quite some workaround; secondly, no tools seem to be available that provide native on-demand asset-loading capabilities for WordPress.

Originally posted here – 

Adding Code-Splitting Capabilities To A WordPress Website Through PoP

Monthly Web Development Update 2/2018: The Grown-Up Web, Branding Details, And Browser Fast Forward

Every profession is a wide field where many people find their very own, custom niches. So are design and web development today. I started building my first website with framesets and HTML4.0, images and a super limited set of CSS, and — oh so fancy — GIFs and inline JavaScript (remember the onclick=”” attribute?) about one and a half decades ago. It took me four days to learn the initial, necessary skills for that.

Original article:

Monthly Web Development Update 2/2018: The Grown-Up Web, Branding Details, And Browser Fast Forward

Now You See Me: How To Defer, Lazy-Load And Act With IntersectionObserver

Once upon a time, there lived a web developer who successfully convinced his customers that sites should not look the same in all browsers, cared about accessibility, and was an early adopter of CSS grids. But deep down in his heart it was performance that was his true passion: He constantly optimized, minified, monitored, and even employed psychological tricks in his projects.
Then, one day, he learned about lazy-loading images and other assets that are not immediately visible to users and are not essential for rendering meaningful content on the screen.

Source – 

Now You See Me: How To Defer, Lazy-Load And Act With IntersectionObserver

Monthly Web Development Update 1/2018: Browser Diversity, Ethical Design, And CSS Alignment

I hope you had a great start into the new year. And while it's quite an arbitrary date, many of us take the start of the year as an opportunity to try to change something in our lives. I think it's well worth doing so, and I wish you the best of luck for accomplishing your realistic goals. I for my part want to start working on my mindfulness, on being able to focus, and on pursuing my dream of building an ethically correct, human company with Colloq that provides real value to users and is profitable by its users.

See the original article here:  

Monthly Web Development Update 1/2018: Browser Diversity, Ethical Design, And CSS Alignment

Understanding And Using REST APIs

There’s a high chance you came across the term “REST API” if you’ve thought about getting data from another source on the internet, such as Twitter or Github. But what is a REST API? What can it do for you? How do you use it?
In this article, you’ll learn everything you need to know about REST APIs to be able to read API documentations and use them effectively.

Read this article: 

Understanding And Using REST APIs