Tag Archives: native


How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

Oleh Mryhlod



React Native is a young technology that is already gaining popularity among developers. It is a great option for smooth, fast, and efficient mobile app development. High performance in mobile environments, code reuse, and a strong community: these are just some of the benefits React Native provides.

In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.

We will delve into the step-by-step process of creating a video/audio recording app with React Native and Expo. Expo is an open-source toolchain built around React Native for developing iOS and Android projects with React and JavaScript. It provides a bunch of native APIs maintained by native developers and the open-source community.

After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.

Let’s get right to it.

Brief Description Of The Application

The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.

The application consists of three main screens that you can switch between with the help of a tab navigator:

  • the audio recording screen,
  • the video recording screen,
  • a screen with a list of all recorded media and functionality to play back or delete them.
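
For orientation, here is a minimal sketch of how such a tab navigator could be wired up with react-navigation. The screen names and file paths below are illustrative; the real navigator lives in src/navigation.

import { TabNavigator } from 'react-navigation';

import RecordAudioScreen from '../screens/RecordAudioScreen';
import RecordVideoScreen from '../screens/RecordVideoScreen';
import LibraryScreen from '../screens/LibraryScreen';

// Three tabs, one per screen described above.
export default TabNavigator({
  RecordAudio: { screen: RecordAudioScreen },
  RecordVideo: { screen: RecordVideoScreen },
  Library: { screen: LibraryScreen },
});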

Check out how this app works by opening this link with Expo.

First, download Expo to your mobile phone. There are two ways to open the project:

  1. Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
  2. Open the link with your mobile phone and click on “Open project using Expo”.

You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.

However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.

You can find the full code for the media recording app in the repository on GitHub.

Dependencies Used For App Development

As mentioned, the media recording app is developed with React Native and Expo.

You can see the full list of dependencies in the repository’s package.json file.

These are the main libraries used:

  • React-navigation, for navigating the application,
  • Redux, for saving the application’s state,
  • React-redux, which are React bindings for Redux,
  • Recompose, for writing the components’ logic,
  • Reselect, for extracting the state fragments from Redux.

Let’s look at the project’s structure:



  • src/index.js: root app component imported in the app.js file;
  • src/components: reusable components;
  • src/constants: global constants;
  • src/styles: global styles, colors, font sizes and dimensions;
  • src/utils: useful utilities and recompose enhancers;
  • src/screens: screens components;
  • src/store: Redux store;
  • src/navigation: application’s navigator;
  • src/modules: Redux modules divided by entity, such as modules/audio, modules/video, modules/navigation.

Let’s proceed to the practical part.

Create Audio Recording Functionality With React Native

First, it’s important to check the documentation for the Expo Audio API, related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.

When launching the application for the first time, you’ll need the user’s permission for audio recording, which entails access to the microphone. Let’s use Expo.AppLoading and ask permission for recording by using Expo.Permissions (see the src/index.js) during startAsync.

await Permissions.askAsync(Permissions.AUDIO_RECORDING);
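
For context, here is a minimal sketch of how that permission request can be wired into Expo.AppLoading’s startAsync phase. The component structure below is illustrative rather than the exact code from src/index.js.

import React from 'react';
import { AppLoading, Permissions } from 'expo';

export default class Root extends React.Component {
  state = { isReady: false };

  // Runs while the splash screen is still shown.
  askPermissions = async () => {
    await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  };

  render() {
    if (!this.state.isReady) {
      return (
        <AppLoading
          startAsync={this.askPermissions}
          onFinish={() => this.setState({ isReady: true })}
          onError={console.warn}
        />
      );
    }
    return null; // render the app's navigator here
  }
}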

Audio recordings are displayed on a separate screen whose UI changes depending on the state.

First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.

My audio recording UI looks like this:



I can save the audio in the Redux store in the following format:

audioItemsIds: ['id1', 'id2'],
audioItems: {
  'id1': {
    id: string,
    title: string,
    recordDate: date string,
    duration: number,
    audioUrl: string,
  }
},
Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.

Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is a dictionary with the following key-value pairs:

  • playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
  • allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
  • interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
  • shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
  • interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.

Note: You can learn more about the customization of AudioMode in the documentation.

I have used the following values in this app:

  • interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX (our recording interrupts audio from other apps on iOS);
  • playsInSilentModeIOS: true;
  • shouldDuckAndroid: true;
  • interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX (our recording interrupts audio from other apps on Android);
  • allowsRecordingIOS, which changes to true before the audio recording starts and back to false after it completes.

To implement this, let’s write the handler setAudioMode with Recompose.

withHandlers({
  setAudioMode: () => async ({ allowsRecordingIOS }) => {
    try {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    } catch (error) {
      console.log(error); // eslint-disable-line
    }
  },
}),

To record the audio, you’ll need to create an instance of the Expo.Audio.Recording class.

const recording = new Audio.Recording();

After creating the recording instance, you will be able to receive the status of the Recording with the help of recordingInstance.getStatusAsync().

The status of the recording is a dictionary with the following key-value pairs:

  • canRecord: a boolean.
  • isRecording: a boolean describing whether the recording is currently recording.
  • isDoneRecording: a boolean.
  • durationMillis: current duration of the recorded audio.

You can also set a function to be called at regular intervals with recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate).

To update the UI, you will need to call setOnRecordingStatusUpdate and set your own callback.

Let’s add some props and a recording callback to the container.

withStateHandlers({
    recording: null,
    isRecording: false,
    durationMillis: 0,
    isDoneRecording: false,
    fileUrl: null,
    audioName: '',
  }, {
    setState: () => obj => obj,
    setAudioName: () => audioName => ({ audioName }),
    recordingCallback: () => ({ durationMillis, isRecording, isDoneRecording }) =>
      ({ durationMillis, isRecording, isDoneRecording }),
  }),

The callback setting for setOnRecordingStatusUpdate is:

recording.setOnRecordingStatusUpdate(props.recordingCallback);

onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI updating smoothly, set a 200-millisecond interval with the help of setProgressUpdateInterval:

recording.setProgressUpdateInterval(200);

After creating an instance of this class, call prepareToRecordAsync to record the audio.

recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(). This method can only be used if the recording instance has not yet been prepared.

The parameters of this method include such options for the recording as sample rate, bitrate, channels, format, encoder and extension. You can find a list of all recording options in this document.

In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.

After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().

Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:

onStartRecording: props => async () => {
      try {
        if (props.recording) {
          props.recording.setOnRecordingStatusUpdate(null);
          props.setState({ recording: null });
        }

        await props.setAudioMode({ allowsRecordingIOS: true });

        const recording = new Audio.Recording();
        recording.setOnRecordingStatusUpdate(props.recordingCallback);
        recording.setProgressUpdateInterval(200);

        props.setState({ fileUrl: null });

        await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
        await recording.startAsync();

        props.setState({ recording });
      } catch (error) {
        console.log(error); // eslint-disable-line
      }
    },

Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable recording on iOS, receive and save the local URL of the recording, and set the onRecordingStatusUpdate callback and the recording instance to null:

onEndRecording: props => async () => {
      try {
        await props.recording.stopAndUnloadAsync();
        await props.setAudioMode({ allowsRecordingIOS: false });
      } catch (error) {
        console.log(error); // eslint-disable-line
      }

      if (props.recording) {
        const fileUrl = props.recording.getURI();
        props.recording.setOnRecordingStatusUpdate(null);
        props.setState({ recording: null, fileUrl });
      }
    },

After this, type the audio name, click the “continue” button, and the audio note will be saved in the Redux store.

onSubmit: props => () => {
      if (props.audioName && props.fileUrl) {
        const audioItem = {
          id: uuid(),
          recordDate: moment().format(),
          title: props.audioName,
          audioUrl: props.fileUrl,
          duration: props.durationMillis,
        };

        props.addAudio(audioItem);
        props.setState({
          audioName: '',
          isDoneRecording: false,
        });

        props.navigation.navigate(screens.LibraryTab);
      }
    },

Audio Playback With React Native

You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.

Here’s what my audio playback UI looks like:



The Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback.

Let’s write the logic of the audio playback by using Recompose in the screen container src/screens/LibraryScreen/LibraryScreenContainer, as the audio player is available only on this screen.

If you want to display the player at any point of the application, I recommend writing the logic of the player and audio playback in Redux operations using redux-thunk.

Let’s customize the audio mode in the same way we did for the audio recording. First, set allowsRecordingIOS to false.

lifecycle({
    async componentDidMount() {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS: false,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    },
  }),

We have created the recording instance for audio recording. As for audio playback, we need to create the sound instance. We can do it in two different ways:

  1. const playbackObject = new Expo.Audio.Sound();
  2. Expo.Audio.Sound.create(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)

If you use the first method, after creating the instance you will need to call playbackObject.loadAsync(), which loads the media from the source into memory and prepares it for playing.

The second method is a static convenience method to construct and load a sound. It creates and loads a sound from source with the optional initialStatus, onPlaybackStatusUpdate and downloadFirst parameters.
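
To make the difference concrete, here is a short sketch of both approaches. The file URL is a placeholder, and the argument values are only meant to illustrate the signatures.

import { Audio } from 'expo';

async function loadSound() {
  // 1. Create the instance, then load the media manually:
  const playbackObject = new Audio.Sound();
  await playbackObject.loadAsync({ uri: 'http://example.com/note.mp3' });

  // 2. Or create and load in a single call:
  const { sound, status } = await Audio.Sound.create(
    { uri: 'http://example.com/note.mp3' },  // source
    { shouldPlay: false },                   // initialStatus
    null,                                    // onPlaybackStatusUpdate
    true,                                    // downloadFirst
  );

  return { playbackObject, sound, status };
}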

The source parameter is the source of the sound. It supports the following forms:

  • a dictionary of the form { uri: 'http://path/to/file' } with a network URL pointing to an audio file on the web;
  • require('path/to/file') for an audio file asset in the source code directory;
  • an Expo.Asset object for an audio file asset.

The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls, describing the state of the playbackObject at that point in time. It is a dictionary of key-value pairs. You can check all of the keys of PlaybackStatus in the documentation.
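
As a rough illustration, these are some of the PlaybackStatus keys this app relies on (the values below are made up; see the documentation for the complete structure):

const exampleStatus = {
  isLoaded: true,
  positionMillis: 1200,    // current playback position
  durationMillis: 60000,   // total duration of the loaded media
  shouldPlay: true,
  isPlaying: true,
  isBuffering: false,
  didJustFinish: false,    // true for one update when playback reaches the end
};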

onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state. The interval is 500 milliseconds by default. In my application, I set it to 50 milliseconds for smoother UI updates.

Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:

withClassVariableHandlers({
    playbackInstance: null,
    isSeeking: false,
    shouldPlayAtEndOfSeek: false,
    playingAudio: null,
  }, 'setClassVariable'),
  withStateHandlers({
    position: null,
    duration: null,
    shouldPlay: false,
    isLoading: true,
    isPlaying: false,
    isBuffering: false,
    showPlayer: false,
  }, {
    setState: () => obj => obj,
  }),

Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:

withHandlers({
    soundCallback: props => (status) => {
      if (status.didJustFinish) {
        props.playbackInstance().stopAsync();
      } else if (status.isLoaded) {
        const position = props.isSeeking()
          ? props.position
          : status.positionMillis;
        const isPlaying = (props.isSeeking() || status.shouldPlay) && status.isPlaying;

        // Push the derived values into component state so the player UI stays in sync.
        props.setState({
          position,
          duration: status.durationMillis,
          shouldPlay: status.shouldPlay,
          isPlaying,
          isBuffering: status.isBuffering,
        });
      }
    },
  }),

After this, you have to implement a handler for the audio playback. If a sound instance has already been created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear the onPlaybackStatusUpdate callback:

loadPlaybackInstance: props => async (shouldPlay) => {
      props.setState({ isLoading: true });

      if (props.playbackInstance() !== null) {
        await props.playbackInstance().unloadAsync();
        props.playbackInstance().setOnPlaybackStatusUpdate(null);
        props.setClassVariable({ playbackInstance: null });
      }
      const { sound } = await Audio.Sound.create(
        { uri: props.playingAudio().audioUrl },
        { shouldPlay, position: 0, duration: 1, progressUpdateIntervalMillis: 50 },
        props.soundCallback,
      );

      props.setClassVariable({ playbackInstance: sound });

      props.setState({ isLoading: false });
    },

Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.

Let’s add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:

onTogglePlaying: props => () => {
      if (props.playbackInstance() !== null) {
        if (props.isPlaying) {
          props.playbackInstance().pauseAsync();
        } else {
          props.playbackInstance().playAsync();
        }
      }
    },

When you click on the playing item, it should stop. If you want to stop audio playback and put it into the 0 playing position, you can use the method playbackInstance.stopAsync():

onStop: props => () => {
      if (props.playbackInstance() !== null) {
        props.playbackInstance().stopAsync();

        props.setShowPlayer(false);
        props.setClassVariable({ playingAudio: null });
      }
    },

The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().

After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):

onCompleteSliding: props => async (value) => {
      if (props.playbackInstance() !== null) {
        if (props.shouldPlayAtEndOfSeek) {
          await props.playbackInstance().playFromPositionAsync(value);
        } else {
          await props.playbackInstance().setPositionAsync(value);
        }
        props.setClassVariable({ isSeeking: false });
      }
    },
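
The matching handler for the start of the slide isn’t shown above; here is a sketch of what it might look like, modeled on the onCompleteSliding handler. The handler name and the use of shouldPlayAtEndOfSeek are assumptions; the actual implementation lives in the container in the repository.

onStartSliding: props => () => {
      if (props.playbackInstance() !== null) {
        // Remember whether playback should resume once the user releases the slider.
        props.setClassVariable({
          isSeeking: true,
          shouldPlayAtEndOfSeek: props.shouldPlay,
        });
        props.playbackInstance().pauseAsync();
      }
    },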

After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).

Video Recording Functionality With React Native

Let’s proceed to video recording.

We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.

To record video, you need permission for access to the camera and microphone. Let’s add the request for camera access as we did with the audio recording (in the file src/index.js):

await Permissions.askAsync(Permissions.CAMERA);

Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.

You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.

Here is what my video recording UI looks like:



Let’s write the video recording logic by using Recompose in the container screen src/screens/RecordVideoScreen/RecordVideoScreenContainer.

You can see the full list of props for the Expo.Camera component in the documentation.

In this application, we will use the following props for Expo.Camera.

  • type: This sets the camera type (front or back).
  • onCameraReady: This callback is invoked when the camera preview is set. You won’t be able to start recording if the camera is not ready.
  • style: This sets the styles for the camera container. In this case, the aspect ratio is 4:3.
  • ref: This is used for direct access to the camera component.

Let’s add the variable for saving the type and handler for its changing.

cameraType: Camera.Constants.Type.back,
toggleCameraType: state => () => ({
      cameraType: state.cameraType === Camera.Constants.Type.front
        ? Camera.Constants.Type.back
        : Camera.Constants.Type.front,
    }),

Let’s add the variable for saving the camera ready state and callback for onCameraReady.

isCameraReady: false,

setCameraReady: () => () => ({ isCameraReady: true }),

Let’s add the variable for saving the camera component reference and setter.

cameraRef: null,

setCameraRef: () => cameraRef => ({ cameraRef }),

Let’s pass these variables and handlers to the camera component.

<Camera
          type={cameraType}
          onCameraReady={setCameraReady}
          style={s.camera}
          ref={setCameraRef}
        />

Now, when calling toggleCameraType after clicking the button, the camera will switch from the front to the back.

Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().

The method recordAsync starts recording a video to be saved to the cache directory.

Arguments:

Options (object) — a map of options:

  • quality (VideoQuality): Specifies the quality of the recorded video. Usage: Camera.Constants.VideoQuality['']; for 16:9 resolution the possible values are 2160p, 1080p, 720p and 480p (Android only), and for 4:3 the size is 640×480. If the chosen quality is not available for the device, the highest available one is chosen.
  • maxDuration (number): Maximum video duration in seconds.
  • maxFileSize (number): Maximum video file size in bytes.
  • mute (boolean): If present, video will be recorded with no sound.

recordAsync returns a promise that resolves to an object containing the video file’s URI property. You will need to save the file’s URI in order to play back the video later. The promise resolves when stopRecording is invoked, when maxDuration or maxFileSize is reached, or when the camera preview is stopped.

Because the camera component’s aspect ratio is set to 4:3, let’s set the same format for the video quality.

Here is what the handler for starting video recording looks like (see the full code of the container in the repository):

onStartRecording: props => async () => {
      if (props.isCameraReady) {
        props.setState({ isRecording: true, fileUrl: null });
        props.setVideoDuration();
        props.cameraRef.recordAsync({ quality: '4:3' })
          .then((file) => {
            props.setState({ fileUrl: file.uri });
          });
      }
    },

During the video recording, we can’t receive the recording status as we have done for audio. That’s why I have created a function to set video duration.
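
That duration function isn’t shown in the article; a minimal sketch of how it could work, assuming a simple one-second timer whose id is kept in state so that stopRecording can clear it (the videoDuration key is an assumption):

setVideoDuration: props => () => {
      let duration = 0;
      // Tick once per second while the camera is recording.
      const interval = setInterval(() => {
        duration += 1;
        props.setState({ videoDuration: duration });
      }, 1000);
      props.setState({ interval });
    },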

To stop the video recording, we have to call the following function:

stopRecording: props => () => {
      if (props.isRecording) {
        props.cameraRef.stopRecording();
        props.setState({ isRecording: false });
        clearInterval(props.interval);
      }
    },

Check out the entire process of video recording.

Video Playback Functionality With React Native

You can play back the video on the “Library” screen. Video notes are located in the “Video” tab.

To start the video playback, click the selected item in the list. Then, switch to the playback screen, where you can watch or delete the video.

The UI for video playback looks like this:



To play back the video, use Expo.Video, a component that displays a video inline with the other React Native UI elements in your app.

The video will be displayed on the separate screen, PlayVideo.

You can check out all of the props for Expo.Video here.

In our application, the Expo.Video component uses native playback controls and looks like this:

<Video
        source={{ uri: videoUrl }}
        style={s.video}
        shouldPlay={isPlaying}
        resizeMode="contain"
        useNativeControls={isPlaying}
        onLoad={onLoad}
        onError={onError}
      />
  • source

    This is the source of the video data to display. The same forms as for Expo.Audio.Sound are supported.

  • resizeMode

    This is a string describing how the video should be scaled for display in the component view’s bounds. It can be “stretch”, “contain” or “cover”.

  • shouldPlay

    This boolean describes whether the media is supposed to play.

  • useNativeControls

    This boolean, if set to true, displays native playback controls (such as play and pause) within the video component.

  • onLoad

    This function is called once the video has been loaded.

  • onError

    This function is called if loading or playback has encountered a fatal error. The function passes a single error message string as a parameter.

When the video has loaded, the play button should be rendered on top of it.

When you click the play button, the video turns on and the native playback controls are displayed.

Let’s write the logic of the video using Recompose in the screen container src/screens/PlayVideoScreen/PlayVideoScreenContainer:

const defaultState = {
  isError: false,
  isLoading: false,
  isPlaying: false,
};

const enhance = compose(
  paramsToProps('videoUrl'),
  withStateHandlers({
    ...defaultState,
    isLoading: true,
  }, {
    onError: () => () => ({ ...defaultState, isError: true }),
    onLoad: () => () => defaultState,
    onTogglePlaying: ({ isPlaying }) => () => ({ ...defaultState, isPlaying: !isPlaying }),
  }),
);

As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That’s why you can create custom controls and use more advanced functionality with the Playback API.

Check out the video playback process:

See the full code for the application in the repository.

You can also install the app on your phone by using Expo and check out how it works in practice.

Wrapping Up

I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.

As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.



Original article: 

How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

How BBC Interactive Content Works Across AMP, Apps, And The Web

In the Visual Journalism team at the BBC, we produce exciting, engaging and interactive visual content, ranging from calculators and visualizations to new storytelling formats.

Each application is a unique challenge to produce in its own right, but even more so when you consider that we have to deploy most projects in many different languages. Our content has to work not only on the BBC News and Sports websites but on their equivalent apps on iOS and Android, as well as on third-party sites which consume BBC content.

Now consider that there is an increasing array of new platforms such as AMP, Facebook Instant Articles, and Apple News. Each platform has its own limitations and proprietary publishing mechanism. Creating interactive content that works across all of these environments is a real challenge. I’m going to describe how we’ve approached the problem at the BBC.

Example: Canonical vs. AMP

This is all a bit theoretical until you see it in action, so let’s delve straight into an example.

Here is a BBC article containing Visual Journalism content:


Screenshot: the canonical BBC News article, where our Visual Journalism content begins with the Donald Trump illustration and sits inside an iframe.

This is the canonical version of the article, i.e., the default version, which you’ll get if you navigate to the article from the homepage.

Now let’s look at the AMP version of the article:


Screenshot: the AMP version of the article. It looks like the same content as the normal article, but pulls in a different iframe designed specifically for AMP, clipped with a ‘Show More’ button.

While the canonical and AMP versions look the same, they are actually two different endpoints with different behavior:

  • The canonical version scrolls you to your chosen country when you submit the form.
  • The AMP version doesn’t scroll you, as you cannot scroll the parent page from within an AMP iframe.
  • The AMP version shows a cropped iframe with a ‘Show More’ button, depending on viewport size and scroll position. This is a feature of AMP.

As well as the canonical and AMP versions of this article, this project was also shipped to the News App, which is yet another platform with its own intricacies and limitations. So how do we support all of these platforms?

Tooling Is Key

We don’t build our content from scratch. We have a Yeoman-based scaffold which uses Node to generate a boilerplate project with a single command.

New projects come with Webpack, SASS, deployment and a componentization structure out of the box. Internationalization is also baked into our projects, using a Handlebars templating system. Tom Maslen writes about this in detail in his post, 13 tips for making responsive web design multi-lingual.

Out of the box, this works pretty well for compiling for one platform but we need to support multiple platforms. Let’s delve into some code.

Embed vs. Standalone

In Visual Journalism, we sometimes output our content inside an iframe so that it can be a self-contained “embed” in an article, unaffected by the global scripting and styling. An example of this is the Donald Trump interactive embedded in the canonical example earlier in this article.

On the other hand, sometimes we output our content as raw HTML. We only do this when we have control over the whole page or if we require really responsive scroll interaction. Let’s call these our “embed” and “standalone” outputs respectively.

Let’s imagine how we might build the “Will a robot take your job?” interactive in both the “embed” and “standalone” formats.


Screenshot: a contrived example showing an ‘embed’ inside a page on the left, versus the same content as a ‘standalone’ page on the right.

Both versions of the content would share the vast majority of their code, but there would be some crucial differences in the implementation of the JavaScript between the two versions.

For example, look at the ‘Find out my automation risk’ button. When the user hits the submit button, they should be automatically scrolled to their results.

The “standalone” version of the code might look like this:

button.on('click', (e) => {
    window.scrollTo(0, resultsContainer.offsetTop);
});

But if you were building this as “embed” output, you know that your content is inside an iframe, so would need to code it differently:

// inside the iframe
button.on('click', () => {
    window.parent.postMessage({ name: 'scroll', offset: resultsContainer.offsetTop }, '*');
});

// inside the host page
window.addEventListener('message', (event) => {
    if (event.data.name === 'scroll') {
        window.scrollTo(0, iframe.offsetTop + event.data.offset);
    }
});

Also, what if our application needs to go full screen? This is easy enough if you’re in a “standalone” page:

document.body.className += ' fullscreen';

.fullscreen {
    position: fixed;
    top: 0;
    left: 0;
    right: 0;
    bottom: 0;
}

Screenshot: the map embed with a ‘Tap to Interact’ overlay, and the same map in full-screen mode after it has been tapped. We successfully use full-screen functionality to make the most of our map module on mobile.

If we tried to do this from inside an “embed,” this same code would have the content scaling to the width and height of the iframe, rather than the viewport:


Screenshot: the same map with a buggy full-screen mode, where text from the surrounding article is visible where it shouldn’t be. It can be difficult going full screen from within an iframe.

…so in addition to applying the full-screen styling inside the iframe, we have to send a message to the host page to apply styling to the iframe itself:

// iframe
window.parent.postMessage({ name: 'window:toggleFullScreen' }, '*');

// host page
window.addEventListener('message', function (event) {
    if (event.data.name === 'window:toggleFullScreen') {
       document.getElementById(iframeUid).className += ' fullscreen';
    }
});

This can translate into a lot of spaghetti code when you start supporting multiple platforms:

button.on('click', (e) => {
    if (inStandalonePage()) {
        window.scrollTo(0, resultsContainer.offsetTop);
    }
    else {
        window.parent.postMessage({ name: 'scroll', offset: resultsContainer.offsetTop }, '*');
    }
});

Imagine doing an equivalent of this for every meaningful DOM interaction in your project. Once you’ve finished shuddering, make yourself a relaxing cup of tea, and read on.

Abstraction Is Key

Rather than forcing our developers to handle these conditionals inside their code, we built an abstraction layer between their content and the environment. We call this layer the ‘wrapper.’

Instead of querying the DOM or native browser events directly, we can now proxy our request through the wrapper module.

import wrapper from 'wrapper';
button.on('click', () => {
    wrapper.scrollTo(resultsContainer.offsetTop);
});

Each platform has its own wrapper implementation conforming to a common interface of wrapper methods. The wrapper wraps itself around our content and handles the complexity for us.


Diagram: the simple ‘scrollTo’ implementation in the standalone wrapper. When our application calls the wrapper’s scroll method, the wrapper calls the native scroll method in the host page.

The standalone wrapper’s implementation of the scrollTo function is very simple, passing our argument directly to window.scrollTo under the hood.

Now let’s look at a separate wrapper implementing the same functionality for the iframe:


Diagram: the advanced ‘scrollTo’ implementation in the embed wrapper. When our application calls the wrapper’s scroll method, the embed wrapper combines the requested scroll position with the offset of the iframe before triggering the native scroll in the host page.

The “embed” wrapper takes the same argument as in the “standalone” example but manipulates the value so that the iframe offset is taken into account. Without this addition, we would have scrolled our user somewhere completely unintended.
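
To make the two diagrams concrete, here is a rough sketch of what each wrapper’s scrollTo could look like. This is an illustration of the idea, not the BBC’s actual wrapper code.

// standalone-wrapper/js/wrapper.js (sketch): pass the value straight through
export default {
    scrollTo(position) {
        window.scrollTo(0, position);
    },
};

// embed-wrapper/js/wrapper.js (sketch): forward the request to the host page
export default {
    scrollTo(position) {
        window.parent.postMessage({ name: 'scroll', offset: position }, '*');
    },
};

// host page (sketch): add the iframe's own offset before scrolling
window.addEventListener('message', (event) => {
    if (event.data.name === 'scroll') {
        window.scrollTo(0, iframe.offsetTop + event.data.offset);
    }
});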

The Wrapper Pattern

Using wrappers results in code that is cleaner, more readable and consistent between projects. It also allows for micro-optimisations over time, as we make incremental improvements to the wrappers to make their methods more performant and accessible. Your project can, therefore, benefit from the experience of many developers.

So, what does a wrapper look like?

Wrapper Structure

Each wrapper essentially comprises three things: a Handlebars template, wrapper JS file, and a SASS file denoting wrapper-specific styling. Additionally, there are build tasks which hook into events exposed by the underlying scaffolding so that each wrapper is responsible for its own pre-compilation and cleanup.

This is a simplified view of the embed wrapper:

embed-wrapper/
    templates/
        wrapper.hbs
    js/
        wrapper.js
    scss/
        wrapper.scss

Our underlying scaffolding exposes your main project template as a Handlebars partial, which is consumed by the wrapper. For example, templates/wrapper.hbs might contain:

<div class="bbc-news-vj-wrapper--embed">
    {{> your-application}}
</div>

scss/wrapper.scss contains wrapper-specific styling that your application code shouldn’t need to define itself. The embed wrapper, for example, replicates a lot of BBC News styling inside the iframe.

Finally, js/wrapper.js contains the iframed implementation of the wrapper API, detailed below. It is shipped separately to the project, rather than compiled in with the application code — we flag wrapper as a global in our Webpack build process. This means that though we deliver our application to multiple platforms, we only compile the code once.
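
One way to achieve that with Webpack is the externals option; the snippet below is a sketch under that assumption rather than the BBC’s actual build configuration.

// webpack.config.js (sketch)
module.exports = {
  // ...
  externals: {
    // `import wrapper from 'wrapper'` is left out of the bundle and resolved
    // against a global `wrapper` object provided by whichever wrapper is loaded.
    wrapper: 'wrapper',
  },
};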

Wrapper API

The wrapper API abstracts a number of key browser interactions. Here are the most important ones:

scrollTo(int)

Scrolls to the given position in the active window. The wrapper will normalise the provided integer before triggering the scroll so that the host page is scrolled to the correct position.

getScrollPosition: int

Returns the user’s current (normalized) scroll position. In the case of the iframe, this means that the scroll position passed to your application is actually negative until the iframe is at the top of the viewport. This is super useful and lets us do things such as animate a component only when it comes into view.

onScroll(callback)

Provides a hook into the scroll event. In the standalone wrapper, this is essentially hooking into the native scroll event. In the embed wrapper, there will be a slight delay in receiving the scroll event since it is passed via postMessage.
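
A small usage sketch combining these two methods with the viewport method described next, assuming the wrapper API above; the chart selector and animateBars are hypothetical names standing in for a project’s own component.

import wrapper from 'wrapper';

const chart = document.querySelector('.bar-chart'); // the component to animate

wrapper.onScroll(() => {
    // Animate once the top of the chart enters the viewport.
    const viewportBottom = wrapper.getScrollPosition() + wrapper.viewport().height;
    if (viewportBottom > chart.offsetTop) {
        animateBars(); // hypothetical animation helper
    }
});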

viewport: { height: int, width: int }

A method to retrieve the viewport height and width (since this is implemented very differently when queried from within an iframe).

toggleFullScreen

In standalone mode, we hide the BBC menu and footer from view and set a position: fixed on our content. In the News App, we do nothing at all — the content is already full screen. The complicated one is the iframe, which relies on applying styles both inside and outside the iframe, coordinated via postMessage.

markPageAsLoaded

Tell the wrapper your content has loaded. This is crucial for our content to work in the News App, which will not attempt to display our content to the user until we explicitly tell the app our content is ready. It also removes the loading spinner on the web versions of our content.
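
A typical call site, assuming the wrapper API above (initialiseApp is a hypothetical stand-in for a project’s own bootstrap code):

import wrapper from 'wrapper';

initialiseApp().then(() => {
    // Tell the News App (and remove the web loading spinner) once we're ready.
    wrapper.markPageAsLoaded();
});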

List Of Wrappers

In the future, we envisage creating additional wrappers for large platforms such as Facebook Instant Articles and Apple News. We have created six wrappers to date:

Standalone Wrapper

The version of our content that should go in standalone pages. Comes bundled with BBC branding.

Embed Wrapper

The iframed version of our content, which is safe to sit inside articles or to syndicate to non-BBC sites, since we retain control over the content.

AMP Wrapper

This is the endpoint which is pulled in as an amp-iframe into AMP pages.

News App Wrapper

Our content must make calls to a proprietary bbcvisualjournalism:// protocol.

Core Wrapper

Contains only the HTML — none of our project’s CSS or JavaScript.

JSON Wrapper

A JSON representation of our content, for sharing across BBC products.

Wiring Wrappers Up To The Platforms

For our content to appear on the BBC site, we provide journalists with a namespaced path:

/include/[department]/[unique ID], e.g. /include/visual-journalism/123-quiz

The journalist puts this “include path” into the CMS, which saves the article structure into the database. All products and services sit downstream of this publishing mechanism. Each platform is responsible for choosing the flavor of content it wants and requesting that content from a proxy server.

Let’s take that Donald Trump interactive from earlier. Here, the include path in the CMS is:

/include/newsspec/15996-trump-tracker/english/index

The canonical article page knows it wants the “embed” version of the content, so it appends /embed to the include path:

/include/newsspec/15996-trump-tracker/english/index/embed

…before requesting it from the proxy server:

https://news.files.bbci.co.uk/include/newsspec/15996-trump-tracker/english/index/embed

The AMP page, on the other hand, sees the include path and appends /amp:

/include/newsspec/15996-trump-tracker/english/index/amp

The AMP renderer does a little magic to render some AMP HTML which references our content, pulling in the /amp version as an iframe:

<amp-iframe src="https://news.files.bbci.co.uk/include/newsspec/15996-trump-tracker/english/index/amp" width="640" height="360">
    <!-- some other AMP elements here -->
</amp-iframe>

Every supported platform has its own version of the content:

/include/newsspec/15996-trump-tracker/english/index/amp

/include/newsspec/15996-trump-tracker/english/index/core

/include/newsspec/15996-trump-tracker/english/index/envelope

...and so on

This solution can scale to incorporate more platform types as they arise.

Abstraction Is Hard

Building a “write once, deploy anywhere” architecture sounds quite idealistic, and it is. For the wrapper architecture to work, we have to be very strict on working within the abstraction. This means we have to fight the temptation to “do this hacky thing to make it work in [insert platform name here].” We want our content to be completely unaware of the environment it is shipped in — but this is easier said than done.

Features Of The Platform Are Hard To Configure Abstractly

Before our abstraction approach, we had complete control over every aspect of our output, including, for example, the markup of our iframe. If we needed to tweak anything on a per-project basis, such as add a title attribute to the iframe for accessibility reasons, we could just edit the markup.

Now that the wrapper markup exists in isolation from the project, the only way of configuring it would be to expose a hook in the scaffold itself. We can do this relatively easily for cross-platform features, but exposing hooks for specific platforms breaks the abstraction. We don’t really want to expose an ‘iframe title’ configuration option that’s only used by the one wrapper.

We could name the property more generically, e.g. title, and then use this value as the iframe title attribute. However, it starts to become difficult to keep track of what is used where, and we risk abstracting our configuration to the point of no longer understanding it. By and large, we try to keep our config as lean as possible, only setting properties that have global use.

Component Behaviour Can Be Complex

On the web, our sharetools module spits out social network share buttons that are individually clickable and open a pre-populated share message in a new window.


Screenshot: BBC Visual Journalism sharetools presenting a list of social share options, with Twitter and Facebook icons.

In the News App, we don’t want to share through the mobile web. If the user has the relevant application installed (e.g. Twitter), we want to share in the app itself. Ideally, we want to present the user with the native iOS/Android share menu, then let them choose their share option before we open the app for them with a pre-populated share message. We can trigger the native share menu from the app by making a call to the proprietary bbcvisualjournalism:// protocol.


Screenshot: the native share menu on Android, with options for sharing via Messaging, Bluetooth, copying to the clipboard, and so on.

However, this screen will be triggered whether you tap ‘Twitter’ or ‘Facebook’ in the ‘Share your results’ section, so the user ends up having to make their choice twice; the first time inside our content, and a second time on the native popup.

This is a strange user journey, so we want to remove the individual share icons from the News app and show a generic share button instead. We are able to do this by explicitly checking which wrapper is in use before we render the component.
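
In code, that forking looks something like the sketch below. wrapper.name and the render helpers are hypothetical names, since the article doesn’t show how the wrapper in use is detected.

import wrapper from 'wrapper';

// Render a single generic button in the News App, individual icons everywhere else.
const shareTools = wrapper.name === 'news-app'
    ? renderGenericShareButton()            // hypothetical helper
    : renderShareIcons(['twitter', 'facebook']); // hypothetical helper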


Screenshot: the generic share button used in the News App, a single button labelled ‘Share how you did’.

Building the wrapper abstraction layer works well for projects as a whole, but when your choice of wrapper affects changes at the component level, it’s very difficult to retain a clean abstraction. In this case, we’ve lost a little abstraction, and we have some messy forking logic in our code. Thankfully, these cases are few and far between.

How Do We Handle Missing Features?

Keeping abstraction is all well and good. Our code tells the wrapper what it wants the platform to do, e.g. “go full screen.” But what if the platform we’re shipping to can’t actually go full-screen?

The wrapper will try its best not to break altogether, but ultimately you need a design which gracefully falls back to a working solution whether or not the method succeeds. We have to design defensively.

Let’s say we have a results section containing some bar charts. We often like to keep the bar chart values at zero until the charts are scrolled into view, at which point we trigger the bars animating to their correct width.


Screenshot: a collection of bar charts comparing the user’s area with the national averages, with each bar’s value displayed as text to the right of the bar.

But if we have no mechanism to hook into the scroll position — as is the case in our AMP wrapper — then the bars would forever remain at zero, which is a thoroughly misleading experience.


Screenshot: how the bar chart could look if scrolling events aren’t forwarded. The bars stay at 0% width and every value is stuck at 0%, which is misleading.

We are increasingly trying to adopt more of a progressive enhancement approach in our designs. For example, we could provide a button which will be visible for all platforms by default, but which gets hidden if the wrapper supports scrolling. That way, if the scroll fails to trigger the animation, the user can still trigger the animation manually.
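
Sketched out, that approach could look like this. wrapper.supportsScrollEvents and the animation helpers are hypothetical names, since the article doesn’t specify the detection mechanism.

import wrapper from 'wrapper';

const button = document.querySelector('.view-results');

if (wrapper.supportsScrollEvents) {
    // Scrolling will trigger the animation, so hide the manual fallback.
    button.hidden = true;
    wrapper.onScroll(animateBarsWhenInView); // hypothetical helper
} else {
    // No scroll events (e.g. AMP): let the user trigger the animation themselves.
    button.addEventListener('click', animateBars); // hypothetical helper
}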


Screenshot: the same bar charts with a subtle grey overlay and a centered ‘View results’ button, which triggers the animation on click as a fallback.

Plans For The Future

We hope to develop new wrappers for platforms such as Apple News and Facebook Instant Articles, as well as to offer all new platforms a ‘core’ version of our content out of the box.

We also hope to get better at progressive enhancement; succeeding in this field means developing defensively. You can never assume all platforms now and in the future will support a given interaction, but a well-designed project should be able to get its core message across without falling at the first technical hurdle.

Working within the confines of the wrapper is a bit of a paradigm shift, and feels like a bit of a halfway house in terms of the long-term solution. But until the industry matures onto a cross-platform standard, publishers will be forced to roll out their own solutions, or use tooling such as Distro for platform-to-platform conversion, or else ignore entire sections of their audience altogether.

It’s early days for us, but so far we’ve had great success in using the wrapper pattern to build our content once and deliver it to the myriad of platforms our audiences are now using.


See the original post:  

How BBC Interactive Content Works Across AMP, Apps, And The Web


Bringing Together React, D3, And Their Ecosystem

Since its creation in 2011, D3.js has become the de facto standard for building complex data visualizations on the web. React is also quickly maturing as the library of choice for creating component-based user interfaces.
React and D3 are both excellent tools designed with goals that sometimes collide. Both take control of user interface elements, and they do so in different ways. How can we make them work together while optimizing for their distinct advantages in your current project?

Visit link – 

Bringing Together React, D3, And Their Ecosystem

5 Useful Tools for Translating Your Website Content


The global economy has expanded your potential market in a way that was not possible even ten years ago, leveling the playing field for small and big businesses. However, it does come with some issues. One of them is the language barrier. If your website is in English, you will get your message across to about 27% of the market. Put another way, about 73% of the global market prefers websites with content in their native language. If people don’t understand the content of your website, you cannot hope to make a sale. You need to give your visitors the…

The post 5 Useful Tools for Translating Your Website Content appeared first on The Daily Egg.

More: 

5 Useful Tools for Translating Your Website Content

How We Built An iOS App To Shoot A 3D Video (Case Study)

It wasn’t long after Hollywood released its first 3D films that the movie format quickly gained huge popularity worldwide. Thanks to developments in video-recording technology, any user can now shoot a video on their own. You can make a stereo record of memorable events in your life or create wonderful material for your business.
Our team was also attracted to 3D filming. We thoroughly studied the features of the human visual apparatus and the technical details of stereoscopic photography.

Read More: 

How We Built An iOS App To Shoot A 3D Video (Case Study)

How To Create Native Cross-Platform Apps With Fuse

Fuse is a toolkit for creating apps that run on both iOS and Android devices. It enables you to create apps using UX Markup, an XML-based language. But unlike the components in React Native and NativeScript, Fuse is not only used to describe the UI and layout; you can also use it to add effects and animation.

How To Create Native Cross-Platform Apps With Fuse

Styles are described by adding attributes such as Color and Margin to the various elements. Business logic is written using JavaScript. Later on, we’ll see how all of these components are combined to build a truly native app.

The post How To Create Native Cross-Platform Apps With Fuse appeared first on Smashing Magazine.

Link to article:

How To Create Native Cross-Platform Apps With Fuse

Web Development Reading List #174: The Bricks We Lay, Remynification, And 0-RTT

We’re all designers. Whether we do a layout, a product design or write code to design a product technically doesn’t matter here. What does matter though, is that we always take the context of a project into consideration. Because as someone shaping a project so that it is appealing to the clients and works in the best way possible for the target audience, we have a pretty big responsibility.

Follow this link:

Web Development Reading List #174: The Bricks We Lay, Remynification, And 0-RTT

App Development Showdown: Why You Should Care About Revisiting The Native Vs. Hybrid Debate In 2017

Back in 2007, the world met the iPhone for the very first time. After Apple’s product debut, it took less than six months for work to begin on PhoneGap, which would become one of the first and most adopted frameworks for hybrid mobile app development — that is, for apps written simultaneously for multiple platforms using HTML, CSS and JavaScript, rather than coded in native languages.
When compared with the prospect of learning an entirely new language and development environment in order to program iOS (and soon Android) apps, the appeal of this type of development to the already huge population of web developers in the world was palpable.

See the original post:  

App Development Showdown: Why You Should Care About Revisiting The Native Vs. Hybrid Debate In 2017

How My API-Driven Website Helps Me Travel The World

Recently, I decided to rebuild my personal website, because it was six years old and looked — politely speaking — a little bit “outdated.” The goal was to include some information about myself, a blog area, a list of my recent side projects, and upcoming events.

How My API-Driven Website Helps Me Travel The World

As I do client work from time to time, there was one thing I didn’t want to deal with — databases! Previously, I built WordPress sites for everyone who wanted me to. The programming part was usually fun for me, but the releases, moving of databases to different environments, and actual publishing, were always annoying.

The post How My API-Driven Website Helps Me Travel The World appeared first on Smashing Magazine.

Visit link: 

How My API-Driven Website Helps Me Travel The World

How To Scale React Applications

We recently released version 3 of React Boilerplate, one of the most popular React starter kits, after several months of work. The team spoke with hundreds of developers about how they build and scale their web applications, and I want to share some things we learned along the way.

How To Scale React Applications

We realized early on in the process that we didn’t want it to be “just another boilerplate.” We wanted to give developers who were starting a company or building a product the best foundation to start from and to scale.

The post How To Scale React Applications appeared first on Smashing Magazine.

Read the article:

How To Scale React Applications