Tag Archives: audio

Visual Studio Live Share Can Do That?

Burke Holland

A few months ago, Microsoft released its free Visual Studio (VS) Live Share service. VS Live Share is Google Docs level collaboration for code. Multiple developers can collaborate on the same file at the same time without ever leaving their own editor.

After the release of Live Share, I realized that many of us have resigned ourselves to working in isolation, unaware that a service like VS Live Share offers better ways to collaborate. This is partly because we are stuck in old habits and partly because we just aren’t aware of all that VS Live Share can do. That last part I can help with!

In this article, we’ll go over the features and best practices for VS Live Share that make developer collaboration as easy as being an “Anonymous Hippo.”


[Image: Google Docs has an interesting way of handling anonymous participants]

Share Your Code

Live Share comes as an extension for both Visual Studio and Visual Studio Code (VS Code). In this article, we’re going to focus on VS Code.


[Image: The VS Live Share extension readme page in VS Code]

You can also install it via the VS Live Share Extension Pack, which includes the following extensions, all of which we are going to cover in this article…

  • VS Live Share
  • VS Live Share Audio
  • Slack Chat extension

Once the extension is installed, you will need to log in to the VS Live Share service. You can do that by opening the Command Palette (Ctrl/Cmd + Shift + P) and selecting “Sign In With Browser”. If you don’t log in and you try to start a new sharing session, you will be prompted to log in at that time.


[Image: The VS Code Command Palette showing the option to sign in with a browser]

There are several ways to kick off a VS Live Share session. You can do it from the Command Palette, you can click that “Share” button in the bottom toolbar, or you can use the VS Live Share explorer view in the Sidebar.


[Image: There are a myriad of ways to start a new VS Live Share session]

A link is copied to your clipboard. You can then send that link to others, and they can join your Live Share session — provided they are using VS Code as well. Which, aren’t we all?

Now you can collaborate just like you were working on a regular old Word document:

The other person can not only see your code, but they can edit it, save it, execute it and even debug it. For you, they show up as a cursor with a name on it. You show up in their editor the same way.

The VS Live Share Explorer

The VS Live Share explorer shows up as a new icon in the Action Bar — which is that bar of icons on the far right of my screen (the far left of yours for default Action Bar placement). This is a sort of “ground zero” for everything VS Live Share. From here, you can start sessions, end them, share terminals, servers, and see who is connected.


[Image: The VS Live Share Explorer is a heads-up view of all things Live Share]

It’s a good idea to bind a keyboard shortcut to this VS Live Share Explorer view so that you can quickly toggle between it and your files. You can do this by opening the Keyboard Shortcuts editor (press Ctrl/Cmd + K followed by Ctrl/Cmd + S) and then searching for “Show Live Share”. I bound mine to Ctrl/Cmd + L, which doesn’t seem to be bound to anything else. I find this shortcut to be intuitive (L for Live Share) and easy to hit on the keyboard.
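If you prefer editing your keybindings.json directly, an entry along these lines should do the same thing. Fair warning: the command ID below is my assumption about how the Live Share view container is registered; copy the exact ID from the “Show Live Share” entry in the Keyboard Shortcuts editor to be safe.

// keybindings.json — the command ID is an assumption; verify it in the Keyboard Shortcuts editor
[
  {
    "key": "ctrl+l", // use "cmd+l" on macOS
    "command": "workbench.view.extension.liveshare"
  }
]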


[Image: You can create a binding for the VS Live Share Explorer viewlet]

Share Code Read-Only

When you start a new sharing session, you will be notified and asked whether you would like to share your workspace read-only. If you select read-only, people will be able to see your code and follow your movements, but they will not be able to interact.


[Image: Sharing sessions are read-write by default, but you can make them read-only]

This mode is useful when you are sharing with someone that you don’t necessarily trust — maybe a vendor, partner or an estranged ex.

It’s also particularly useful for instructors. Note that at the time of this writing, VS Live Share is locked to 5 concurrent users. Since you are probably going to want more than that in read-only mode, especially if you’re teaching a group, you can up the limit to 30 by adding the following line to your User Settings file (Ctrl/Cmd + , opens it):

"liveshare.features": "experimental"

Change The Default Join Behavior

Anyone with the link can join your Live Share session. When they join, you’ll see a pop-up letting you know. Likewise, when they disconnect, you get notified. This is the default behavior for VS Live Share.


[Image: VS Code will alert you whenever someone joins your session]

It’s a good idea to change this so that you have to manually approve someone before they can join your session. This is to protect you in the case where you go to lunch and forget to disconnect your session. Your co-workers can’t log back in, change one letter in your database connection string and then laugh while you spend the next four hours trying to figure out how your life has gone so horribly wrong.

To enable this, add the following line to your User Settings file (Ctrl/Cmd + ,):

"liveshare.guestApprovalRequired": true

Now you’ll be prompted when someone wants to join. If you block someone, they are blocked for the duration of the session. If they try to join again, you won’t be notified and they will be unceremoniously rejected by VS Live Share.

Go and enjoy your lunch. Your computer is safe.

Focus Followers

By default, anyone who joins your Live Share session is “following” you. That means that their editor will load up whatever file you are in and scroll whenever you scroll. Even if you switch files, participants will see exactly what you see.

The second that a person makes changes to a file, they are no longer following you. So if you are both working on a file together, and then you go to a different file, they won’t automatically go with you. That can lead to a lot of confusion with you talking about code in the file you’re in while the other person is looking at something entirely different.

Besides just telling each other where you are (which works, btw), there is a handy command called “Focus Participants” in the Command Palette (Ctrl/Cmd + Shift + P).


[Image: Access the “focus” command from the VS Code Command Palette]

You can also access it as an icon in the VS Live Share Explorer view.


[Image: Send a follow request by clicking the follow icon in the VS Live Share Explorer viewlet]

This will focus your participants on the next thing you click on or scroll to. By default, VS Live Share focus requests are accepted implicitly. If you don’t want people to be able to focus you, you can add the following line to your User Settings file.

"liveshare.focusBehavior": "prompt"

Also note that you can follow participants. If you click on their name in the VS Live Share Explorer view, you will begin to follow them.

Because following is turned off as soon as the other person begins editing code, it can be tough to know exactly when people are following you and when they aren’t. One place you can look is in the VS Live Share Explorer view. It will tell you the file that a person is in, but not whether or not they are following you.

A good practice is to just remember that focus is always changing so people may or may not see what you see at any given time.
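For reference, here is what the User Settings tweaks covered so far look like combined in one settings file:

{
  "liveshare.features": "experimental",
  "liveshare.guestApprovalRequired": true,
  "liveshare.focusBehavior": "prompt"
}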

Debug As A Team

Participants can share any debug sessions that you run. If you start a debug session, they will get the exact same experience that you do. If it breaks on your side, it breaks on theirs, and they get the full debug view into all of your code.

They can step in, out, over, add watches, evaluate in the Debug Console; any debugging that you can do, they can do too, and they can control it.

Debugging can also be launched by participants. By default, though, VS Code does not allow your debugger to be started remotely. To enable this, add the following line to your User Settings file (Ctrl/Cmd + ,):

"liveshare.allowGuestDebugControl": true

Share Your Terminal

A lot of the work we do as developers isn’t in our code; it’s in the terminal. Some days it seems like I spend about as much time in my terminal as I do in my editor. This means that if you have an error in your terminal or need to type some command, it would be nice if the participants in your VS Live Share session could see your terminal in addition to your code.

VS Code has an integrated terminal, and you can share it with VS Live Share.


[Image: Access the “Share Terminal” command from the VS Code Command Palette]

When you do this, you have the opportunity to share your terminal as read-only, or as read-write.


[Image: Always share your terminal read-only unless you absolutely have to share it with write access]

By default, you should be sharing your terminal as read-only. When you share your terminal read-write, the user can execute arbitrary commands directly on your terminal. Let that sink in for a moment. That’s heavy.

It goes without saying that having remote write access to someone’s terminal comes with a lot of trust and responsibility. You should only ever share your terminal read-write with people that you trust implicitly. Estranged exes are probably off the table.

Sharing your terminal read-only safely allows the person on the other end of the line to see what you are typing and your terminal output in real time, but restricts them from typing anything into that terminal.

Should you find yourself in a scenario where it would be quicker for the other person to just get at your terminal instead of trying to walk you through some wacky command with a ton of flags, you can share your terminal read-write. In this mode, the other person has full remote access to your terminal. Choose your friends wisely.

Share Your localhost

Say the command running in your shared terminal ends with a link to a site running on http://localhost:8080. With VS Live Share, you can share that localhost so that the other person can access it just like it was their own localhost.

If you are running a shared debug session, when the participant hits that localhost URL on their end, it will break for both of you if a breakpoint is hit. Even better, you can share any TCP process. That means that you can share something like a database or a Redis cache. For instance, you could share your local MongoDB server. Seriously! This means no more changing config files or trying to get a shared database up. Just share the port for your local MongoDB instance.

Share The Right Files The Right Way

Sometimes you don’t want collaborators to see certain files. There are likely private keys and passwords in your project that are not checked into source control and not suitable for public viewing. In this case, you would want to hide those files from anyone participating in your Live Share session.

By default, VS Live Share will hide any file that is specified in your .gitignore. If there is a file that you want to hide, just add it to your .gitignore. Note, though, that this only hides the file in the project view. If you are in a shared debugging session and you step into a file that is in the .gitignore, it is still loaded up in the editor, and your collaborators will be able to see it.

You can get more fine-grained control over how you share files by creating a .vsls.json file.

For instance, if you wanted to make sure that any files that are in the .gitignore are never visible, even during debugging, you can set the gitignore property to exclude.


    "$schema": "http://json.schemastore.org/vsls",
    "gitignore":"exclude"

Likewise, you could show everything in your .gitignore and control file visibility directly from the .vsls.json file. To do that, set the gitignore property to none and then use the excludeFiles and hideFiles properties. Remember: exclude means never visible, and hide means “not visible in the file explorer.”


    "$schema": "http://json.schemastore.org/vsls",
    "gitignore":"none",
    "excludeFiles":[
        "*.env"
    ],
    "hideFiles": [
        "dist"
    ]

Sharing And Extensions

Part of the appeal of VS Code to a lot of developers is the massive extensions marketplace. Most people will have more than a few installed. It’s important to understand how extensions will work, or not work, in the context of VS Live Share.

VS Live Share will synchronize anything that is specific to the context of the project you are sharing. For instance, if you have the Vetur extension installed because you are working with a Vue project, it will be shared across to any participants — regardless of whether or not they have it installed as well. The same is true for other context-specific things, like linters, formatters, debuggers, and language services.

VS Live Share does not synchronize extensions that are user specific. These would be things like themes, icons, keyboard bindings, and so on. As a general rule of thumb, VS Live Share shares your context, not your screen. You can consult the official docs article on this subject for a more in-depth explanation of what extensions you can expect to be shared.

Communicate While You Collaborate

One of the first things people do on their inaugural VS Live Share experience is to try to communicate by typing in code comments. This seems like the write (get it?) thing to do, but it’s not really how VS Live Share was designed to be used.

VS Live Share is not meant to replace your chat client of choice. You likely already have a preferred chat mechanism, and VS Live Share assumes that you will continue to use that.

If you’re already using Slack, there is a VS Code extension called Slack Chat. This extension is still a tad early in its development, but it looks quite promising. It puts VS Code in split mode and embeds Slack on the right-hand side. Even better, you can start a Live Share session directly from the Slack chat.


[Image: The Slack Chat extension puts Slack inside of your editor]

Another tool that looks quite interesting is called CodeStream.

CodeStream

While VS Live Share looks to improve collaboration from the editor, CodeStream is aiming to solve that same problem from a chat perspective.

The CodeStream extension allows you to chat directly within VS Code and those chats become part of your code history. You can highlight a chunk of code to discuss and it goes directly into the chat so there is context for your comments. These comments are then saved as part of your Git repo. They also show up in your code as little comment icons, and these comments will show up no matter which branch you are on.

When it comes to VS Live Share, CodeStream offers a complementary set of features. You can start new sessions directly from the chat pane, as well as by clicking on an avatar. New sessions automatically create a corresponding chat channel that you can persist with the code, or dispose of when you are done.

If chatting isn’t enough to get the job done, and you need to collaborate like it’s 1999, help is just a phone call away.

VS Live Share Audio

While VS Live Share isn’t trying to reinvent chat, it does re-invent your telephone. Kind of.

With the VS Live Share Audio extension, you can call someone directly and do voice chat from within VS Code.


[Image: Make audio calls from VS Code using the VS Live Share Audio extension]

The other person will then get a prompt to join your call.


[Image: VS Code will ask you if you want to join an audio call that is in process]

You will see a speaker icon in the bottom status bar when you are connected to a call. You can click on that speaker to change your audio device, mute yourself, or disconnect from the call.


[Image: You have full control over audio settings when in a VS Live Share Audio call]

The last tip I’ll give you is probably the most important, and it’s not a fancy feature or obscure setting you didn’t know existed.

Change Your Muscle Memory

We’ve got years of learned behavior when it comes to getting help or sharing our code. The state of developer collaboration tools has been so bad for so long that we are conditioned to paste code into Slack, start awkward Skype calls that consist mostly of “tell me when you can see my screen”, or crowd around a monitor and point excessively, i.e. stock photo style.


[Image: A group of people pointing at a computer screen]

The most important thing you can do to get the most out of VS Live Share is to actually use VS Live Share. And it will have to be a “conscious” effort.

Your brain is good at patterns. You are constantly recognizing and classifying the world around you based on patterns you have identified, and you are so good at it, you don’t even realize you are doing it. You then develop default responses to these patterns. You form instincts. This is why you will default to the old ways of collaboration without even thinking about what you are doing. Before you know it you will be on a Skype call with someone sharing your screen — even if you have Live Share installed.

I’ve written a lot about VS Code and people will ask me from time to time how they can get more productive with their editor. I always say the same thing: the next time you reach for the mouse to do something, stop. Can you do that something with the keyboard instead? You probably can. Look up the shortcut and then make yourself use it. At first it’s going to be slower, but if you are willing to deliberately adopt a different behavior, you will be astonished at how fast your brain will default to the more productive way of doing something.

The same goes for Live Share. You will be on a call sharing your screen when it occurs to you that you could be using Live Share. At that moment, stop; click that “Share” button in the bottom of VS Code.

Yes, the person on the other end may not have the extension installed. Yes, it may take a moment to set it up. But if you work on establishing this behavior now, the next time you go to do this, it will “just work” and it won’t be long before you don’t even have to think about it, and at that point, you will finally have achieved that “Anonymous Hippo” level of collaboration.



8 Website Redesign Tips, Examples and Best Practices


Every website redesign is different. I’ve worked on a ton of them, so I should know. What’s important, though, is that you take a strategic approach to your website redesign. Know what isn’t working, what does currently work, and what goals you wish to achieve. Let’s look at some of my favorite techniques for creating a website redesign strategy and implementing it for maximum ROI. If you’d like to skip around, here are the topics I’ll address: How Do You Know If Your Website Needs a Redesign? How to Start the Website Redesign Process 8 Website Redesign Tips and Best…

The post 8 Website Redesign Tips, Examples and Best Practices appeared first on The Daily Egg.


How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

Oleh Mryhlod

React Native is a young technology, already gaining popularity among developers. It is a great option for smooth, fast, and efficient mobile app development. High-performance rates for mobile environments, code reuse, and a strong community: These are just some of the benefits React Native provides.

In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.

We will delve into the step-by-step process of creating a video/audio recording app with React Native and Expo. Expo is an open-source toolchain built around React Native for developing iOS and Android projects with React and JavaScript. It provides a bunch of native APIs maintained by native developers and the open-source community.

After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.

Let’s get right to it.

Brief Description Of The Application

The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.

The application consists of three main screens that you can switch between with the help of a tab navigator:

  • the audio recording screen,
  • the video recording screen,
  • a screen with a list of all recorded media and functionality to play back or delete them.

Check out how this app works by opening this link with Expo.

First, download Expo to your mobile phone. There are two options to open the project:

  1. Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
  2. Open the link with your mobile phone and click on “Open project using Expo”.

You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.

However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.

You can find the full code for the media recording app in the repository on GitHub.

Dependencies Used For App Development

As mentioned, the media recording app is developed with React Native and Expo.

You can see the full list of dependencies in the repository’s package.json file.

These are the main libraries used:

  • React-navigation, for navigating the application,
  • Redux, for saving the application’s state,
  • React-redux, which are React bindings for Redux,
  • Recompose, for writing the components’ logic,
  • Reselect, for extracting the state fragments from Redux.

Let’s look at the project’s structure:



  • src/index.js: root app component imported in the app.js file;
  • src/components: reusable components;
  • src/constants: global constants;
  • src/styles: global styles, colors, font sizes and dimensions;
  • src/utils: useful utilities and recompose enhancers;
  • src/screens: screens components;
  • src/store: Redux store;
  • src/navigation: application’s navigator;
  • src/modules: Redux modules divided by entities as modules/audio, modules/video, modules/navigation.

Let’s proceed to the practical part.

Create Audio Recording Functionality With React Native

First, it’s important to check the documentation for the Expo Audio API related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.

When launching the application for the first time, you’ll need the user’s permission for audio recording, which entails access to the microphone. Let’s use Expo.AppLoading and ask permission for recording by using Expo.Permissions (see src/index.js) during startAsync.

await Permissions.askAsync(Permissions.AUDIO_RECORDING);
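For context, here is a minimal sketch of how that permission request can be wired into Expo.AppLoading during startup. The component shape and handler name are illustrative, not the repository’s exact code:

import React from 'react';
import { View } from 'react-native';
import { AppLoading, Permissions } from 'expo';

export default class App extends React.Component {
  state = { isReady: false };

  // Ask for the permissions the app needs while the splash screen is up.
  handleLoadingProcess = async () => {
    await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  };

  render() {
    if (!this.state.isReady) {
      return (
        <AppLoading
          startAsync={this.handleLoadingProcess}
          onFinish={() => this.setState({ isReady: true })}
          onError={console.warn}
        />
      );
    }

    return <View />; // the real app tree goes here
  }
}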

Audio recordings are displayed on a separate screen whose UI changes depending on the state.

First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.

My audio recording UI looks like this:


[Image: The audio recording screen UI]

I can save the audio in the Redux store in the following format:

{
  audioItemsIds: ['id1', 'id2'],
  audioItems: {
    'id1': {
      id: string,
      title: string,
      recordDate: date string,
      duration: number,
      audioUrl: string,
    },
  },
}

Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.

Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is the dictionary with the following key-value pairs:

  • playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
  • allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
  • interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
  • shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
  • interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.

Note: You can learn more about the customization of AudioMode in the documentation.

I have used the following values in this app:

interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX (our recording interrupts audio from other apps on iOS),

playsInSilentModeIOS: true,

shouldDuckAndroid: true,

interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX (our recording interrupts audio from other apps on Android).

allowsRecordingIOS will change to true before the audio recording and to false after its completion.

To implement this, let’s write the handler setAudioMode with Recompose.

withHandlers({
  setAudioMode: () => async ({ allowsRecordingIOS }) => {
    try {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    } catch (error) {
      console.log(error); // eslint-disable-line
    }
  },
}),

To record the audio, you’ll need to create an instance of the Expo.Audio.Recording class.

const recording = new Audio.Recording();

After creating the recording instance, you will be able to receive the status of the Recording with the help of recordingInstance.getStatusAsync().

The status of the recording is a dictionary with the following key-value pairs:

  • canRecord: a boolean.
  • isRecording: a boolean describing whether the recording is currently recording.
  • isDoneRecording: a boolean.
  • durationMillis: current duration of the recorded audio.

You can also set a function to be called at regular intervals with recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate).

To update the UI, you will need to call setOnRecordingStatusUpdate and set your own callback.

Let’s add some props and a recording callback to the container.

withStateHandlers({
  recording: null,
  isRecording: false,
  durationMillis: 0,
  isDoneRecording: false,
  fileUrl: null,
  audioName: '',
}, {
  setState: () => obj => obj,
  setAudioName: () => audioName => ({ audioName }),
  recordingCallback: () => ({ durationMillis, isRecording, isDoneRecording }) =>
    ({ durationMillis, isRecording, isDoneRecording }),
}),

The callback setting for setOnRecordingStatusUpdate is:

recording.setOnRecordingStatusUpdate(props.recordingCallback);

onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI updating smoothly, set a 200-millisecond interval with the help of setProgressUpdateInterval:

recording.setProgressUpdateInterval(200);

After creating an instance of this class, call prepareToRecordAsync to record the audio.

recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(). This method can be used if the recording instance has never been prepared.

The parameters of this method include such options for the recording as sample rate, bitrate, channels, format, encoder and extension. You can find a list of all recording options in this document.

In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.

After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().

Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:

onStartRecording: props => async () => {
  try {
    if (props.recording) {
      props.recording.setOnRecordingStatusUpdate(null);
      props.setState({ recording: null });
    }

    await props.setAudioMode({ allowsRecordingIOS: true });

    const recording = new Audio.Recording();
    recording.setOnRecordingStatusUpdate(props.recordingCallback);
    recording.setProgressUpdateInterval(200);

    props.setState({ fileUrl: null });

    await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
    await recording.startAsync();

    props.setState({ recording });
  } catch (error) {
    console.log(error); // eslint-disable-line
  }
},

Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable it on iOS, receive and save the local URL of the recording, and set OnRecordingStatusUpdate and the recording instance to null:

onEndRecording: props => async () => {
  try {
    await props.recording.stopAndUnloadAsync();
    await props.setAudioMode({ allowsRecordingIOS: false });
  } catch (error) {
    console.log(error); // eslint-disable-line
  }

  if (props.recording) {
    const fileUrl = props.recording.getURI();
    props.recording.setOnRecordingStatusUpdate(null);
    props.setState({ recording: null, fileUrl });
  }
},

After this, type the audio name, click the “continue” button, and the audio note will be saved in the Redux store.

onSubmit: props => () => {
  if (props.audioName && props.fileUrl) {
    const audioItem = {
      id: uuid(),
      recordDate: moment().format(),
      title: props.audioName,
      audioUrl: props.fileUrl,
      duration: props.durationMillis,
    };

    props.addAudio(audioItem);
    props.setState({
      audioName: '',
      isDoneRecording: false,
    });

    props.navigation.navigate(screens.LibraryTab);
  }
},

Audio Playback With React Native

You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.

Here’s what my audio playback UI looks like:


[Image: The audio playback UI]

The Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback.

Let’s write the logic of the audio playback by using Recompose in the screen container src/screens/LibraryScreen/LibraryScreenContainer, as the audio player is available only on this screen.

If you want to display the player at any point of the application, I recommend writing the logic of the player and audio playback in Redux operations using redux-thunk.

Let’s customize the audio mode in the same way we did for the audio recording. First, set allowsRecordingIOS to false.

lifecycle({
  async componentDidMount() {
    await Audio.setAudioModeAsync({
      allowsRecordingIOS: false,
      interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
      playsInSilentModeIOS: true,
      shouldDuckAndroid: true,
      interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    });
  },
}),

We have created the recording instance for audio recording. As for audio playback, we need to create the sound instance. We can do it in two different ways:

  1. const playbackObject = new Expo.Audio.Sound();
  2. Expo.Audio.Sound.create(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)

If you use the first method, you will need to call playbackObject.loadAsync(), which loads the media from source into memory and prepares it for playing, after creation of the instance.

The second method is a static convenience method to construct and load a sound. It creates and loads a sound from source with the optional initialStatus, onPlaybackStatusUpdate and downloadFirst parameters.
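To make the difference concrete, here is a short sketch of both approaches (the URL is just a placeholder):

// Option 1: construct the instance, then load the media explicitly.
const playbackObject = new Expo.Audio.Sound();
await playbackObject.loadAsync({ uri: 'http://example.com/audio.mp3' });

// Option 2: construct and load in a single static call.
const { sound } = await Expo.Audio.Sound.create(
  { uri: 'http://example.com/audio.mp3' }, // source
  { shouldPlay: false },                   // initialStatus
  null,                                    // onPlaybackStatusUpdate
  true,                                    // downloadFirst
);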

The source parameter is the source of the sound. It supports the following forms:

  • a dictionary of the form { uri: 'http://path/to/file' } with a network URL pointing to an audio file on the web;
  • require('path/to/file') for an audio file asset in the source code directory;
  • an Expo.Asset object for an audio file asset.

The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls describing the state of the playbackObject at that point of time. It is a dictionary with the key-value pairs. You can check all of the keys of the PlaybackStatus in the documentation.

onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state. The interval is 500 milliseconds by default. In my application, I set it to a 50-millisecond interval for a proper UI update.

Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:

withClassVariableHandlers({
  playbackInstance: null,
  isSeeking: false,
  shouldPlayAtEndOfSeek: false,
  playingAudio: null,
}, 'setClassVariable'),
withStateHandlers({
  position: null,
  duration: null,
  shouldPlay: false,
  isLoading: true,
  isPlaying: false,
  isBuffering: false,
  showPlayer: false,
}, {
  setState: () => obj => obj,
}),

Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:

withHandlers({
  soundCallback: props => (status) => {
    if (status.didJustFinish) {
      props.playbackInstance().stopAsync();
    } else if (status.isLoaded) {
      const position = props.isSeeking()
        ? props.position
        : status.positionMillis;
      const isPlaying = props.isSeeking()
        ? props.isPlaying
        : status.isPlaying;
      props.setState({
        position,
        duration: status.durationMillis,
        shouldPlay: status.shouldPlay,
        isPlaying,
        isBuffering: status.isBuffering,
      });
    }
  },
}),

After this, you have to implement a handler for the audio playback. If a sound instance is already created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear OnPlaybackStatusUpdate:

loadPlaybackInstance: props => async (shouldPlay) => {
  props.setState({ isLoading: true });

  if (props.playbackInstance() !== null) {
    await props.playbackInstance().unloadAsync();
    props.playbackInstance().setOnPlaybackStatusUpdate(null);
    props.setClassVariable({ playbackInstance: null });
  }

  const { sound } = await Audio.Sound.create(
    { uri: props.playingAudio().audioUrl },
    { shouldPlay, position: 0, duration: 1, progressUpdateIntervalMillis: 50 },
    props.soundCallback,
  );

  props.setClassVariable({ playbackInstance: sound });

  props.setState({ isLoading: false });
},

Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.
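The exact wiring lives in the repository, but a simplified press handler for a list item might look something like this (the handler name is hypothetical; the showPlayer flag and class variables follow the state introduced above):

onPressItem: props => (audioItem) => {
  // Remember which recording is active and reveal the player UI,
  // then load and autoplay the selected audio.
  props.setClassVariable({ playingAudio: audioItem });
  props.setState({ showPlayer: true });
  props.loadPlaybackInstance(true);
},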

Let’s add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:

onTogglePlaying: props => () => {
  if (props.playbackInstance() !== null) {
    if (props.isPlaying) {
      props.playbackInstance().pauseAsync();
    } else {
      props.playbackInstance().playAsync();
    }
  }
},

When you click on the playing item, it should stop. If you want to stop audio playback and put it into the 0 playing position, you can use the method playbackInstance.stopAsync():

onStop: props => () => {
  if (props.playbackInstance() !== null) {
    props.playbackInstance().stopAsync();

    props.setShowPlayer(false);
    props.setClassVariable({ playingAudio: null });
  }
},

The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().

After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):

onCompleteSliding: props => async (value) => {
  if (props.playbackInstance() !== null) {
    if (props.shouldPlayAtEndOfSeek) {
      await props.playbackInstance().playFromPositionAsync(value);
    } else {
      await props.playbackInstance().setPositionAsync(value);
    }
    props.setClassVariable({ isSeeking: false });
  }
},

After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).

Video Recording Functionality With React Native

Let’s proceed to video recording.

We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.

To record video, you need permission for access to the camera and microphone. Let’s add the request for camera access as we did with the audio recording (in the file src/index.js):

await Permissions.askAsync(Permissions.CAMERA);

Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.

You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.

Here is what my video recording UI looks like:


[Image: The video recording screen UI]

Let’s write the video recording logic by using Recompose on the container screen src/screens/RecordVideoScreen/RecordVideoScreenContainer.

You can see the full list of props for the Expo.Camera component in the documentation.

In this application, we will use the following props for Expo.Camera.

  • type: The camera type is set (front or back).
  • onCameraReady: This callback is invoked when the camera preview is set. You won’t be able to start recording if the camera is not ready.
  • style: This sets the styles for the camera container. In this case, the size is 4:3.
  • ref: This is used for direct access to the camera component.

Let’s add the variable for saving the type and handler for its changing.

cameraType: Camera.Constants.Type.back,

toggleCameraType: state => () => ({
  cameraType: state.cameraType === Camera.Constants.Type.front
    ? Camera.Constants.Type.back
    : Camera.Constants.Type.front,
}),

Let’s add the variable for saving the camera ready state and callback for onCameraReady.

isCameraReady: false,

setCameraReady: () => () => ({ isCameraReady: true }),

Let’s add the variable for saving the camera component reference and setter.

cameraRef: null,

setCameraRef: () => cameraRef => ({ cameraRef }),

Let’s pass these variables and handlers to the camera component.

<Camera
  type={cameraType}
  onCameraReady={setCameraReady}
  style={s.camera}
  ref={setCameraRef}
/>

Now, when calling toggleCameraType after clicking the button, the camera will switch from the front to the back.

Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().

The method recordAsync starts recording a video to be saved to the cache directory.

Arguments:

Options (object) — a map of options:

  • quality (VideoQuality): Specify the quality of recorded video. Usage: Camera.Constants.VideoQuality[''], possible values: for 16:9 resolution, 2160p, 1080p, 720p and 480p (Android only); for 4:3, the size is 640×480. If the chosen quality is not available for the device, the highest available one is chosen.
  • maxDuration (number): Maximum video duration in seconds.
  • maxFileSize (number): Maximum video file size in bytes.
  • mute (boolean): If present, video will be recorded with no sound.

recordAsync returns a promise that resolves to an object containing the video file’s URI property. You will need to save the file’s URI in order to play back the video later. The promise resolves when stopRecording is invoked, when one of maxDuration or maxFileSize is reached, or when the camera preview is stopped.

Because the ratio set for the camera component sides is 4:3, let’s set the same format for the video quality.

Here is what the handler for starting video recording looks like (see the full code of the container in the repository):

onStartRecording: props => async () => {
  if (props.isCameraReady) {
    props.setState({ isRecording: true, fileUrl: null });
    props.setVideoDuration();
    props.cameraRef.recordAsync({ quality: '4:3' })
      .then((file) => {
        props.setState({ fileUrl: file.uri });
      });
  }
},

During the video recording, we can’t receive the recording status as we have done for audio. That’s why I have created a function to set video duration.
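That duration function isn’t shown in the excerpt above, but the idea is simple. Here is a minimal sketch, assuming a videoDuration state field and the interval value that stopRecording (below) clears:

setVideoDuration: props => () => {
  // Expo.Camera exposes no recording-status callback, so count the
  // elapsed seconds manually while the recording is in progress.
  let seconds = 0;
  const interval = setInterval(() => {
    seconds += 1;
    props.setState({ videoDuration: seconds });
  }, 1000);
  props.setState({ interval });
},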

To stop the video recording, we have to call the following function:

stopRecording: props => () => {
  if (props.isRecording) {
    props.cameraRef.stopRecording();
    props.setState({ isRecording: false });
    clearInterval(props.interval);
  }
},

Check out the entire process of video recording.

Video Playback Functionality With React Native

You can play back the video on the “Library” screen. Video notes are located in the “Video” tab.

To start the video playback, click the selected item in the list. Then, switch to the playback screen, where you can watch or delete the video.

The UI for video playback looks like this:


[Image: The video playback UI]

To play back the video, use Expo.Video, a component that displays a video inline with the other React Native UI elements in your app.

The video will be displayed on the separate screen, PlayVideo.

You can check out all of the props for Expo.Video in the documentation.

In our application, the Expo.Video component uses native playback controls and looks like this:

<Video
  source={{ uri: videoUrl }}
  style={s.video}
  shouldPlay={isPlaying}
  resizeMode="contain"
  useNativeControls={isPlaying}
  onLoad={onLoad}
  onError={onError}
/>
  • source

    This is the source of the video data to display. The same forms as for Expo.Audio.Sound are supported.

  • resizeMode

    This is a string describing how the video should be scaled for display in the component view’s bounds. It can be “stretch”, “contain” or “cover”.

  • shouldPlay

    This boolean describes whether the media is supposed to play.

  • useNativeControls

    This boolean, if set to true, displays native playback controls (such as play and pause) within the video component.

  • onLoad

    This function is called once the video has been loaded.

  • onError

    This function is called if loading or playback has encountered a fatal error. The function passes a single error message string as a parameter.

When the video has loaded, a play button should be rendered on top of it.

When you click the play button, the video turns on and the native playback controls are displayed.

Let’s write the logic of the video using Recompose in the screen container src/screens/PlayVideoScreen/PlayVideoScreenContainer:

const defaultState = {
  isError: false,
  isLoading: false,
  isPlaying: false,
};

const enhance = compose(
  paramsToProps('videoUrl'),
  withStateHandlers({
    ...defaultState,
    isLoading: true,
  }, {
    onError: () => () => ({ ...defaultState, isError: true }),
    onLoad: () => () => defaultState,
    onTogglePlaying: ({ isPlaying }) => () => ({ ...defaultState, isPlaying: !isPlaying }),
  }),
);

As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That’s why you can create custom controls and use more advanced functionality with the Playback API.

Check out the video playback process:

See the full code for the application in the repository.

You can also install the app on your phone by using Expo and check out how it works in practice.

Wrapping Up

I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.

As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.


Designing For Accessibility And Inclusion

Steven Lambert



“Accessibility is solved at the design stage.” This is a phrase that Daniel Na and his team heard over and over again while attending a conference. To design for accessibility means to be inclusive to the needs of your users. This includes your target users, users outside of your target demographic, users with disabilities, and even users from different cultures and countries. Understanding those needs is the key to crafting better and more accessible experiences for them.

One of the most common problems when designing for accessibility is knowing what needs you should design for. It’s not that we intentionally design to exclude users, it’s just that “we don’t know what we don’t know.” So, when it comes to accessibility, there’s a lot to know.

How do we go about understanding the myriad of users and their needs? How can we ensure that their needs are met in our design? To answer these questions, I have found that it is helpful to apply a critical analysis technique of viewing a design through different lenses.

“Good [accessible] design happens when you view your [design] from many different perspectives, or lenses.”

The Art of Game Design: A Book of Lenses

A lens is “a narrowed filter through which a topic can be considered or examined.” Often used to examine works of art, literature, or film, lenses ask us to leave behind our worldview and instead view the world through a different context.

For example, viewing art through a lens of history asks us to understand the “social, political, economic, cultural, and/or intellectual climate of the time.” This allows us to better understand what world influences affected the artist and how that shaped the artwork and its message.

Accessibility lenses are a filter that we can use to understand how different aspects of the design affect the needs of the users. Each lens presents a set of questions to ask yourself throughout the design process. By using these lenses, you will become more inclusive to the needs of your users, allowing you to design a more accessible user experience for all.

The Lenses of Accessibility are covered one by one in the sections that follow.

You should know that not every lens will apply to every design. While some can apply to every design, others are more situational. What works best in one design may not work for another.

The questions provided by each lens are merely a tool to help you understand what problems may arise. As always, you should test your design with users to ensure it’s usable and accessible to them.

Lens Of Animation And Effects

Effective animations can help bring a page and brand to life, guide the user’s focus, and help orient a user. But animations are a double-edged sword. Not only can misusing animations cause confusion or be distracting, but they can also be potentially deadly for some users.

Fast flashing effects (defined as flashing more than three times a second) or high-intensity effects and patterns can cause seizures in people with photosensitive epilepsy. Photosensitivity can also cause headaches, nausea, and dizziness. Users with photosensitive epilepsy have to be very careful when using the web as they never know when something might cause a seizure.

Other effects, such as parallax or motion effects, can cause some users to feel dizzy or experience vertigo due to vestibular sensitivity. The vestibular system controls a person’s balance and sense of motion. When this system doesn’t function as it should, it causes dizziness and nausea.

“Imagine a world where your internal gyroscope is not working properly. Very similar to being intoxicated, things seem to move of their own accord, your feet never quite seem to be stable underneath you, and your senses are moving faster or slower than your body.”

A Primer To Vestibular Disorders

Constant animations or motion can also be distracting to users, especially to users who have difficulty concentrating. GIFs are notably problematic as our eyes are drawn towards movement, making it easy to be distracted by anything that updates or moves constantly.

This isn’t to say that animation is bad and you shouldn’t use it. Instead you should understand why you’re using the animation and how to design safer animations. Generally speaking, you should try to design animations that cover small distances, match the direction and speed of other moving objects (including scroll), and are small relative to the screen size.

You should also provide controls or options to cater the experience for the user. For example, Slack lets you hide animated images or emojis as both a global setting and on a per image basis.
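On the web, one common way to offer that kind of safer default is to check the user’s OS-level “reduce motion” preference before running any non-essential animation. A general sketch in plain JavaScript, not tied to any particular framework:

// Respect the user's OS-level "reduce motion" setting.
const reduceMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function maybeAnimate(element) {
  if (reduceMotion.matches) {
    return; // leave the element static for motion-sensitive users
  }
  element.classList.add('animated'); // hypothetical CSS animation class
}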

To use the Lens of Animation and Effects, ask yourself these questions:

  • Are there any effects that could cause a seizure?
  • Are there any animations or effects that could cause dizziness or vertigo through use of motion?
  • Are there any animations that could be distracting by constantly moving, blinking, or auto-updating?
  • Is it possible to provide controls or options to stop, pause, hide, or change the frequency of any animations or effects?

Lens Of Audio And Video

Autoplaying videos and audio can be pretty annoying. Not only do they break a user’s concentration, but they also force the user to hunt down the offending media and mute or stop it. As a general rule, don’t autoplay media.

“Use autoplay sparingly. Autoplay can be a powerful engagement tool, but it can also annoy users if undesired sound is played or they perceive unnecessary resource usage (e.g. data, battery) as the result of unwanted video playback.”

Google Autoplay guidelines

You’re now probably asking, “But what if I autoplay the video in the background but keep it muted?” While using videos as backgrounds may be a growing trend in today’s web design, background videos suffer from the same problems as GIFs and constant moving animations: they can be distracting. As such, you should provide controls or options to pause or disable the video.
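As a sketch of what such a control can look like in plain JavaScript (the selectors are placeholders):

// Wire a pause/play toggle to a muted, autoplaying background video.
const video = document.querySelector('.background-video');
const toggle = document.querySelector('.video-toggle');

toggle.addEventListener('click', () => {
  if (video.paused) {
    video.play();
    toggle.textContent = 'Pause background video';
  } else {
    video.pause();
    toggle.textContent = 'Play background video';
  }
});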

Along with controls, videos should have transcripts and/or subtitles so users can consume the content in a way that works best for them. Users who are visually impaired or who would rather read instead of watch the video need a transcript, while users who aren’t able to or don’t want to listen to the video need subtitles.

To use the Lens of Audio and Video, ask yourself these questions:

  • Are there any audio or video that could be annoying by autoplaying?
  • Is it possible to provide controls to stop, pause, or hide any audio or videos that autoplay?
  • Do videos have transcripts and/or subtitles?

Lens Of Color

Color plays an important part in a design. Colors evoke emotions, feelings, and ideas. Colors can also help strengthen a brand’s message and perception. Yet the power of colors is lost when a user can’t see them or perceives them differently.

Color blindness affects roughly 1 in 12 men and 1 in 200 women. Deuteranopia (red-green color blindness) is the most common form of color blindness, affecting about 6% of men. Users with red-green color blindness typically perceive reds, greens, and oranges as yellowish.


[Image: Color blindness reference chart for deuteranopia, protanopia, and tritanopia. Deuteranopia (green color blindness) is common and causes reds to appear brown/yellow and greens to appear beige. Protanopia (red color blindness) is rare and causes reds to appear dark/black and orange/greens to appear yellow. Tritanopia (blue-yellow color blindness) is very rare and causes blues to appear more green/teal and yellows to appear violet/grey. (Source)]

Color meaning is also problematic for international users. Colors mean different things in different countries and cultures. In Western cultures, red is typically used to represent negative trends and green positive trends, but the opposite is true in Eastern and Asian cultures.

Because colors and their meanings can be lost either through cultural differences or color blindness, you should always add a non-color identifier. Identifiers such as icons or text descriptions can help bridge cultural differences while patterns work well to distinguish between colors.


[Image: Six colored labels; five use a pattern while the sixth doesn’t. Trello’s color blind friendly labels use different patterns to distinguish between the colors.]

Oversaturated colors, high contrasting colors, and even just the color yellow can be uncomfortable and unsettling for some users, particularly those on the autism spectrum. It’s best to avoid high concentrations of these types of colors to help users remain comfortable.

Poor contrast between foreground and background colors makes content harder to see for users with low vision, users on a low-end monitor, or anyone who happens to be in direct sunlight. All text, icons, and any focus indicators used by keyboard users should meet a minimum contrast ratio of 4.5:1 against the background color.
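That 4.5:1 figure comes from the WCAG definition of contrast ratio, which you can compute yourself from the relative luminance of the two colors. A small JavaScript sketch:

// WCAG relative luminance for an sRGB color given as [r, g, b] 0-255.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((channel) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio runs from 1:1 (identical colors) to 21:1 (black on white).
function contrastRatio(foreground, background) {
  const [lighter, darker] = [
    relativeLuminance(foreground),
    relativeLuminance(background),
  ].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: #767676 grey on white just passes at roughly 4.54:1.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2));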

You should also ensure your design and colors work well in different settings of Windows High Contrast mode. A common pitfall is that text becomes invisible on certain high contrast mode backgrounds.

To use the Lens of Color, ask yourself these questions:

  • If the color was removed from the design, what meaning would be lost?
  • How could I provide meaning without using color?
  • Are any colors oversaturated or have high contrast that could cause users to become overstimulated or uncomfortable?
  • Do the foreground and background colors of all text, icons, and focus indicators meet the contrast ratio guideline of 4.5:1?

Lens Of Controls

Controls, also called ‘interactive content,’ are any UI elements that the user can interact with, be they buttons, links, inputs, or any HTML element with an event listener. Controls that are too small or too close together can cause lots of problems for users.

Small controls are hard to click on for users who are unable to be accurate with a pointer, such as those with tremors, or those who suffer from reduced dexterity due to age. The default size of checkboxes and radio buttons, for example, can pose problems for older users. Even when a label is provided that could be clicked on instead, not all users know they can do so.

Controls that are too close together can cause problems for touch screen users. Fingers are big and difficult to be precise with. Accidentally touching the wrong control can cause frustration, especially if that control navigates you away or makes you lose your context.


Tweet that says Software being Done is like lawn being Mowed. Jim Benson


When touching a single line tweet, it’s very easy to accidentally click the person’s name or handle instead of opening the tweet because there’s not enough space between them. (Source) (Large preview)

Controls that are nested inside another control can also contribute to touch errors. Not only is it not allowed in the HTML spec, it also makes it easy to accidentally select the parent control instead of the one you wanted.

To give users enough room to accurately select a control, the recommended minimum size for a control is 34 by 34 device independent pixels, but Google recommends at least 48 by 48 pixels, while the WCAG spec recommends at least 44 by 44 pixels. This size includes any padding the control has, so a control that is visually 24 by 24 pixels with an additional 10 pixels of padding on every side works out to 44 by 44 pixels.

It’s also recommended that controls be placed far enough apart to reduce touch errors. Microsoft recommends at least 8 pixels of spacing while Google recommends controls be spaced at least 32 pixels apart.

Controls should also have a visible text label. Not only do screen readers require the text label to know what the control does, but it’s been shown that text labels help all users better understand a control’s purpose. This is especially important for form inputs and icons.

To use the Lens of Controls, ask yourself these questions:

  • Are any controls not large enough for someone to touch?
  • Are any controls so close together that it would be easy to touch the wrong one?
  • Are there any controls inside another control or clickable region?
  • Do all controls have a visible text label?

Lens Of Font

In the early days of the web, we designed web pages with a font size between 9 and 14 pixels. This worked out just fine back then as monitors had a relatively known screen size. We designed thinking that the browser window was a constant, something that couldn’t be changed.

Technology today is very different than it was 20 years ago. Today, browsers can be used on any device of any size, from a small watch to a huge 4K screen. We can no longer use fixed font sizes to design our sites. Font sizes must be as responsive as the design itself.

Not only should the font sizes be responsive, but the design should be flexible enough to allow users to customize the font size, line height, or letter spacing to a comfortable reading level. Many users make use of custom CSS that helps them have a better reading experience.

The font itself should be easy to read. You may be wondering if one font is more readable than another. The truth of the matter is that the typeface doesn’t really make a difference to readability. Instead, it’s the font style that plays an important role in a font’s readability.

Decorative or cursive font styles are harder to read for many users, but especially problematic for users with dyslexia. Small font sizes, italicized text, and all uppercase text are also difficult for users. Overall, larger text, shorter line lengths, taller line heights, and increased letter spacing can help all users have a better reading experience.

To use the Lens of Font, ask yourself these questions:

  • Is the design flexible enough that the font could be modified to a comfortable reading level by the user?
  • Is the font style easy to read?

Lens Of Images and Icons

They say, “A picture is worth a thousand words.” Still, a picture you can’t see is speechless, right?

Images can be used in a design to convey a specific meaning or feeling. Other times they can be used to simplify complex ideas. Whichever the case for the image, a user who uses a screen reader needs to be told what the meaning of the image is.

As the designer, you understand best the meaning or information the image conveys. As such, you should annotate the design with this information so it’s not left out or misinterpreted later. This will be used to create the alt text for the image.

How you describe an image depends entirely on context, or how much textual information is already available that describes the information. It also depends on if the image is just for decoration, conveys meaning, or contains text.

“You almost never describe what the picture looks like, instead you explain the information the picture contains.”

Five Golden Rules for Compliant Alt Text

Since knowing how to describe an image can be difficult, there’s a handy decision tree to help when deciding. Generally speaking, if the image is decorative or there’s surrounding text that already describes the image’s information, no further information is needed. Otherwise you should describe the information of the image. If the image contains text, repeat the text in the description as well.

Descriptions should be succinct. It’s recommended to use no more than two sentences, but aim for one concise sentence when possible. This allows users to quickly understand the image without having to listen to a lengthy description.

As an example, if you were to describe this image for a screen reader, what would you say?


Vincent van Gogh’s The Starry Night


Source (Large preview)

Since we describe the information of the image and not the image itself, the description could be Vincent van Gogh’s The Starry Night since there is no other surrounding context that describes it. What you shouldn’t put is a description of the style of the painting or what the picture looks like.

If the information of the image would require a lengthy description, such as a complex chart, you shouldn’t put that description in the alt text. Instead, you should still use a short description for the alt text and then provide the long description as either a caption or link to a different page.

This way, users can still get the most important information quickly but have the ability to dig in further if they wish. If the image is of a chart, you should repeat the data of the chart just like you would for text in the image.

If the platform you are designing for allows users to upload images, you should provide a way for the user to enter the alt text along with the image. For example, Twitter allows its users to write alt text when they upload an image to a tweet.

To use the Lens of Images and Icons, ask yourself these questions:

  • Does any image contain information that would be lost if it was not viewable?
  • How could I provide the information in a non-visual way?
  • If the image is controlled by the user, is it possible to provide a way for them to enter the alt text description?

Lens Of Keyboard

Keyboard accessibility is among the most important aspects of an accessible design, yet it is also among the most overlooked.

There are many reasons why a user would use a keyboard instead of a mouse. Users who use a screen reader use the keyboard to read the page. A user with tremors may use a keyboard because it provides better accuracy than a mouse. Even power users will use a keyboard because it’s faster and more efficient.

A user using a keyboard typically uses the tab key to navigate to each control in sequence. A logical tab order greatly helps users know where the next key press will take them. In Western cultures, this usually means from left to right, top to bottom. Unexpected tab orders result in users becoming lost and scanning frantically for where the focus went.

Sequential tab order also means users must tab through every control before the one they want. If that control is tens or hundreds of keystrokes away, it can be a real pain point for the user.

By making the most important user flows nearer to the top of the tab order, we can help users be more efficient and effective. However, this isn’t always possible or practical to do. In these cases, providing a way to quickly jump to a particular flow or piece of content can still allow them to be efficient. This is why “skip to content” links are helpful.

A good example of this is Facebook which provides a keyboard navigation menu that allows users to jump to specific sections of the site. This greatly speeds up the ability for a user to interact with the page and the content they want.


facebook


Facebook provides a way for all keyboard users to jump to specific sections of the page, or other pages within Facebook, as well as an Accessibility Help menu. (Large preview)

When tabbing through a design, focus styles should always be visible or a user can easily become lost. Just like an unexpected tab order, not having good focus indicators results in users not knowing what is currently focused and having to scan the page.

Changing the look of the default focus indicator can sometimes improve the experience for users. A good focus indicator doesn’t rely on color alone to indicate focus (Lens of Color) and should be distinct enough for the user to find it easily. For example, a blue focus ring around a similarly colored blue button may not be distinct enough to tell that the button is focused.

Although this lens focuses on keyboard accessibility, it’s important to note that it applies to any way a user could interact with a website without a mouse. Devices such as mouth sticks, switch access buttons, sip and puff buttons, and eye tracking software all require the page to be keyboard accessible.

By improving keyboard accessibility, you allow a wide range of users better access to your site.

To use the Lens of Keyboard, ask yourself these questions:

  • What keyboard navigation order makes the most sense for the design?
  • How could a keyboard user get to what they want in the quickest way possible?
  • Is the focus indicator always visible and visually distinct?

Lens Of Layout

Layout contributes a great deal to the usability of a site. Having a layout that is easy to follow with easy to find content makes all the difference to your users. A layout should have a meaningful and logical sequence for the user.

With the advent of CSS Grid, being able to change the layout to be more meaningful based on the available space is easier than ever. However, changing the visual layout creates problems for users who rely on the structural layout of the page.

The structural layout is what is used by screen readers and users using a keyboard. When the visual layout changes but not the underlying structural layout, these users can become confused as their tab order is no longer logical. If you must change the visual layout, you should do so by changing the structural layout so users using a keyboard maintain a sequential and logical tab order.

The layout should be resizable and flexible to a minimum of 320 pixels with no horizontal scroll bars so that it can be viewed comfortably on a phone. The layout should also be flexible enough to be zoomed in to 400% (also with no horizontal scroll bars) for users who need to increase the font size for a better reading experience.

Users using a screen magnifier benefit when related content is in close proximity. A screen magnifier only gives the user a small view of the entire layout, so content that is related but far apart, or that changes far from where the interaction occurred, is hard to find and can go unnoticed.

GIF of CodePen showing that clicking on a button does not update the interface
When performing a search on CodePen, the search button is in the top right corner of the page. Clicking the button reveals a large search input on the opposite side of the screen. A user using a screen magnifier would be hard pressed to notice the change and would think the button doesn’t work. (Large preview)

To use the Lens of Layout, ask yourself these questions:

  • Does the layout have a meaningful and logical sequence?
  • What should happen to the layout when it’s viewed on a small screen or zoomed in to 400%?
  • Is content that is related or changes due to user interaction in close proximity to one another?

Lens Of Material Honesty

Material honesty is an architectural design value that states that a material should be honest to itself and not be used as a substitute for another material. It means that concrete should look like concrete and not be painted or sculpted to look like bricks.

Material honesty values and celebrates the unique properties and characteristics of each material. An architect who follows material honesty knows when each material should be used and how to use it without tarnishing itself.

Material honesty is not a hard and fast rule though. It lies on a continuum. Like all values, you are allowed to break them when you understand them. As the saying goes, they are “more what you’d call ‘guidelines’ than actual rules.”

When applied to web design, material honesty means that one element or component shouldn’t look, behave, or function as if it were another element or component. Doing so would cheat the user and could lead to confusion. A common example of this is buttons that look like links or links that look like buttons.

Links and buttons have different behaviors and affordances. A link is activated with the enter key, typically takes you to a different page, and has a special context menu on right click. Buttons are activated with the space key, used primarily to trigger interactions on the current page, and have no such context menu.

When a link is styled to look like a button or vice versa, a user could become confused as it does not behave and function as it looks. If the “button” navigates the user away unexpectedly, they might become frustrated if they lose data in the process.

“At first glance everything looks fine, but it won’t stand up to scrutiny. As soon as such a website is stress‐tested by actual usage across a range of browsers, the façade crumbles.”

Resilient Web Design

Where this becomes the most problematic is when a link and button are styled the same and are placed next to one another. As there is nothing to differentiate between the two, a user can accidentally navigate when they thought they wouldn’t.


Three links and/or buttons shown inline with text


Can you tell which one of these will navigate you away from the page and which won’t? (Large preview)

When a component behaves differently than expected, it can easily lead to problems for users using a keyboard or screen reader. An autocomplete menu that is more than an autocomplete menu is one such example.

Autocomplete is used to suggest or predict the rest of a word a user is typing. An autocomplete menu allows a user to select from a large list of options when not all options can be shown.

An autocomplete menu is typically attached to an input field and is navigated with the up and down arrow keys, keeping the focus inside the input field. When a user selects an option from the list, that option will override the text in the input field. Autocomplete menus are meant to be lists of just text.

The problem arises when an autocomplete menu starts to gain more behaviors. Not only can you select an option from the list, but you can edit it, delete it, or even expand or collapse sections. The autocomplete menu is no longer just a simple list of selectable text.




With the addition of edit, delete, and profile buttons, this autocomplete menu is materially dishonest. (Large preview)

The added behaviors no longer mean you can just use the up and down arrows to select an option. Each option now has more than one action, so a user needs to be able to traverse two dimensions instead of just one. This means that a user using a keyboard could become confused on how to operate the component.

Screen readers suffer the most from this change of behavior as there is no easy way to help them understand it. A lot of work will be required to ensure the menu is accessible to a screen reader by using non-standard means. As such, it might result in a sub-par or inaccessible experience for them.

To avoid these issues, it’s best to be honest to the user and the design. Instead of combining two distinct behaviors (an autocomplete menu and edit and delete functionality), leave them as two separate behaviors. Use an autocomplete menu to just autocomplete the name of a user, and have a different component or page to edit and delete users.

To use the Lens of Material Honesty, ask yourself these questions:

  • Is the design being honest to the user?
  • Are there any elements that behave, look, or function as another element?
  • Are there any components that combine distinct behaviors into a single component? Does doing so make the component materially dishonest?

Lens Of Readability

Have you ever picked up a book only to get a few paragraphs or pages in and want to give up because the text was too hard to read? Hard to read content is mentally taxing and tiring.

Sentence length, paragraph length, and complexity of language all contribute to how readable the text is. Complex language can pose problems for users, especially those with cognitive disabilities or who aren’t fluent in the language.

Along with using plain and simple language, you should ensure each paragraph focuses on a single idea. A paragraph with a single idea is easier to remember and digest. The same is true of a sentence with fewer words.

Another contributor to the readability of content is the length of a line. The ideal line length is often quoted to be between 45 and 75 characters. A line that is too long causes users to lose focus and makes it harder to move to the next line correctly, while a line that is too short causes users to jump too often, causing fatigue on the eyes.

“The subconscious mind is energized when jumping to the next line. At the beginning of every new line the reader is focused, but this focus gradually wears off over the duration of the line”

— Typographie: A Manual of Design

You should also break up the content with headings, lists, or images to give mental breaks to the reader and support different learning styles. Use headings to logically group and summarize the information. Headings, links, controls, and labels should be clear and descriptive to enhance the user’s ability to comprehend.

To use the Lens of Readability, ask yourself these questions:

  • Is the language plain and simple?
  • Does each paragraph focus on a single idea?
  • Are there any long paragraphs or long blocks of unbroken text?
  • Are all headings, links, controls, and labels clear and descriptive?

Lens Of Structure

As mentioned in the Lens of Layout, the structural layout is what is used by screen readers and users using a keyboard. While the Lens of Layout focused on the visual layout, the Lens of Structure focuses on the structural layout, or the underlying HTML and semantics of the design.

As a designer, you may not write the structural layout of your designs. This shouldn’t stop you from thinking about how your design will ultimately be structured though. Otherwise, your design may result in an inaccessible experience for a screen reader.

Take for example a design for a single elimination tournament bracket.


Eight person tournament bracket featuring George, Fred, Linus, Lucy, Jack, Jill, Fred, and Ginger. Ginger ultimately wins against George.


Large preview

How would you know if this design was accessible to a user using a screen reader? Without understanding structure and semantics, you may not. As it stands, the design would probably result in an inaccessible experience for a user using a screen reader.

To understand why that is, we first must understand that a screen reader reads a page and its content in sequential order. This means that every name in the first column of the tournament would be read, followed by all the names in the second column, then third, then the last.

“George, Fred, Linus, Lucy, Jack, Jill, Fred, Ginger, George, Lucy, Jack, Ginger, George, Ginger, Ginger.”

If all you had was a list of seemingly random names, how would you interpret the results of the tournament? Could you say who won the tournament? Or who won game 6?

With nothing more to work with, a user using a screen reader would probably be a bit confused about the results. To be able to understand the visual design, we must provide the user with more information in the structural design.

This means that as a designer you need to know how a screen reader interacts with the HTML elements on a page so you know how to enhance their experience.

  • Landmark Elements (header, nav, main, and footer)
    Allow a screen reader to jump to important sections in the design.
  • Headings (h1–h6)
    Allow a screen reader to scan the page and get a high level overview. Screen readers can also jump to any heading.
  • Lists (ul and ol)
    Group related items together, and allow a screen reader to easily jump from one item to another.
  • Buttons
    Trigger interactions on the current page.
  • Links
    Navigate or retrieve information.
  • Form labels
    Tell screen readers what each form input is.

Knowing this, how might we provide more meaning to a user using a screen reader?

To start, we could group each column of the tournament into rounds and use headings to label each round. This way, a screen reader would understand when a new round takes place.

Next, we could help the user understand which players are playing against each other each game. We can again use headings to label each game, allowing them to find any game they might be interested in.

By just adding headings, the content would read as follows:

“__Round 1, Game 1__, George, Fred, __Game 2__, Linus, Lucy, __Game 3__, Jack, Jill, __Game 4__, Fred, Ginger, __Round 2, Game 5__, George, Lucy, __Game 6__, Jack, Ginger, __Round 3__, __Game 7__, George, Ginger, __Winner__, Ginger.”

This is already a lot more understandable than before.

The information still doesn’t answer who won a game though. To know that, you’d have to understand which game a winner plays next to see who won the previous game. For example, you’d have to know that the winner of game four plays in game six to know who advanced from game four.

We can further enhance the experience by informing the user who won each game so they don’t have to go hunting for it. Putting the text “(winner)” after the person who won the round would suffice.

We should also further group the games and rounds together using lists. Lists provide the structural semantics of the design, essentially informing the user of the connected nodes from the visual design.

If we translate this back into a visual design, the result could look as follows:


The tournament bracket


The tournament with descriptive headings and winner information (shown here with grey background). (Large preview)

Since the headings and winner text are redundant in the visual design, you could hide them just from visual users so the end visual result looks just like the first design.

“If the end result is visually the same as where we started, why did we go through all this?” You may ask.

The reason is that you should always annotate your design with all the necessary structural design requirements needed for a better screen reader experience. This way, the person who implements the design knows to add them. If you had just handed the first design to the implementer, it would more than likely end up inaccessible.

To use the Lens of Structure, ask yourself these questions:

  • Can I outline a rough HTML structure of my design?
  • How can I structure the design to better help a screen reader understand the content or find the content they want?
  • How can I help the person who will implement the design understand the intended structure?

Lens Of Time

Periodically in a design you may need to limit the amount of time a user can spend on a task. Sometimes it may be for security reasons, such as a session timeout. Other times it could be due to a non-functional requirement, such as a time constrained test.

Whatever the reason, you should understand that some users may need more time in order to finish the task. Some users might need more time to understand the content, others might not be able to perform the task quickly, and a lot of the time they could simply have been interrupted.

“The designer should assume that people will be interrupted during their activities”

— The Design of Everyday Things

Users who need more time to perform an action should be able to adjust or remove a time limit when possible. For example, with a session timeout you could alert the user when their session is about to expire and allow them to extend it.
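
As a rough sketch of that pattern (the session length, warning window, and extendSession() call are all hypothetical placeholders), the warning could be wired up like this:

var SESSION_LENGTH = 20 * 60 * 1000; // hypothetical 20-minute session
var WARNING_BEFORE = 2 * 60 * 1000;  // warn the user 2 minutes before expiry

setTimeout(function () {
    // confirm() stands in for a real, accessible dialog here
    if (confirm('Your session is about to expire. Stay signed in?')) {
        extendSession(); // hypothetical call to the backend to renew the session
    }
}, SESSION_LENGTH - WARNING_BEFORE);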

To use the Lens of Time, ask yourself this question:

  • Is it possible to provide controls to adjust or remove time limits?

Bringing It All Together

So now that you’ve learned about the different lenses of accessibility through which you can view your design, what do you do with them?

The lenses can be used at any point in the design process, even after the design has been shipped to your users. Just start with a few of them at hand, and one at a time carefully analyze the design through a lens.

Ask yourself the questions and see if anything should be adjusted to better meet the needs of a user. As you slowly make changes, bring in other lenses and repeat the process.

By looking through your design one lens at a time, you’ll be able to refine the experience to better meet users’ needs. As you are more inclusive to the needs of your users, you will create a more accessible design for all your users.

Using lenses and insightful questions to examine principles of accessibility was heavily influenced by Jesse Schell and his book “The Art of Game Design: A Book of Lenses.”



Taken from – 

Designing For Accessibility And Inclusion

Getting Started With The Web MIDI API

As the web continues to evolve and new browser technologies continue to emerge, the line between native and web development becomes more and more blurred. New APIs are unlocking the ability to code entirely new categories of software in the browser.

Until recently, the ability to interact with digital musical instruments has been limited to native, desktop applications. The Web MIDI API is here to change that.

In this article, we’ll cover the basics of MIDI and the Web MIDI API to see how simple it can be to create a web app that responds to musical input using JavaScript.

What Is MIDI?

MIDI has been around for a long time but has only recently made its debut in the browser. MIDI (Musical Instrument Digital Interface) is a technical standard that was first published in 1983 and created the means for digital instruments, synthesizers, computers, and various audio devices to communicate with each other. MIDI messages relay musical and time-based information back and forth between devices.

A typical MIDI setup might consist of a digital piano keyboard which can send messages relaying information such as pitch, velocity (how loudly or softly a note is played), vibrato, volume, panning, modulation, and more to a sound synthesizer which converts that into audible sound. The keyboard could also send its signals to desktop music scoring software or a digital audio workstation (DAW) which could then convert the signals into written notation, save them as a sound file, and more.

MIDI is a fairly versatile protocol, too. In addition to playing and recording music, it has become a standard protocol in stage and theater applications, as well, where it is often used to relay cue information or control lighting equipment.


A performer plays a digital piano onstage


MIDI lets digital instruments, synthesizers, and software talk to each other and is frequently used in live shows and theatrical productions. Photo by Puk Khantho on Unsplash.

MIDI In The Browser

The WebMIDI API brings all the utility of MIDI to the browser with some pretty simple JavaScript. We only need to learn about a few new methods and objects.

Introduction

First, there’s the navigator.requestMIDIAccess() method. It does exactly what it sounds like—it will request access to any MIDI devices (inputs or outputs) connected to your computer. You can confirm the browser supports the API by checking for the existence of this method.

if (navigator.requestMIDIAccess) {
    console.log('This browser supports WebMIDI!');
} else {
    console.log('WebMIDI is not supported in this browser.');
}

Second, there’s the MIDIAccess object which contains references to all available inputs (such as piano keyboards) and outputs (such as synthesizers). The requestMIDIAccess() method returns a promise, so we need to establish success and failure callbacks. And if the browser is successful in connecting to your MIDI devices, it will return a MIDIAccess object as an argument to the success callback.

navigator.requestMIDIAccess()
    .then(onMIDISuccess, onMIDIFailure);

function onMIDISuccess(midiAccess) {
    console.log(midiAccess);

    var inputs = midiAccess.inputs;
    var outputs = midiAccess.outputs;
}

function onMIDIFailure() {
    console.log('Could not access your MIDI devices.');
}

Third, MIDI messages are conveyed back and forth between inputs and outputs with a MIDIMessageEvent object. These messages contain information about the MIDI event such as pitch, velocity (how softly or loudly a note is played), timing, and more. We can start collecting these messages by adding simple callback functions (listeners) to our inputs and outputs.
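
Messages also travel the other way. Although this article focuses on receiving input, each entry in MIDIAccess.outputs exposes a send() method that takes the same kind of data array (and an optional timestamp). As a minimal sketch, with illustrative note and velocity values of my choosing, playing middle C for one second on every connected output might look like this:

function playMiddleC(midiAccess) {
    for (var output of midiAccess.outputs.values()) {
        output.send([144, 60, 100]); // "note on", middle C, moderately loud
        output.send([128, 60, 64], window.performance.now() + 1000); // "note off" one second later
    }
}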

Going Deeper

Let’s dig in. To send MIDI messages from our MIDI devices to the browser, we’ll start by adding an onmidimessage listener to each input. This callback will be triggered whenever a message is sent by the input device, such as the press of a key on the piano.

We can loop through our inputs and assign the listener like this:

function onMIDISuccess(midiAccess) {
    for (var input of midiAccess.inputs.values()) {
        input.onmidimessage = getMIDIMessage;
    }
}

function getMIDIMessage(midiMessage) {
    console.log(midiMessage);
}
The MIDIMessageEvent object we get back contains a lot of information, but what we’re most interested in is the data array. This array typically contains three values (e.g. [144, 72, 64]). The first value tells us what type of command was sent, the second is the note value, and the third is velocity. The command type could be “note on,” “note off,” a controller (such as pitch bend or piano pedal), or some other kind of system exclusive (“sysex”) event unique to that device/manufacturer.

For the purposes of this article, we’ll just focus on properly identifying “note on” and “note off” messages. Here are the basics:

  • A command value of 144 signifies a “note on” event, and 128 typically signifies a “note off” event.
  • Note values are on a range from 0–127, lowest to highest. For example, the lowest note on an 88-key piano has a value of 21, and the highest note is 108. A “middle C” is 60.
  • Velocity values are also given on a range from 0–127 (softest to loudest). The softest possible “note on” velocity is 1.
  • A velocity of 0 is sometimes used in conjunction with a command value of 144 (which typically represents “note on”) to indicate a “note off” message, so it’s helpful to check if the given velocity is 0 as an alternate way of interpreting a “note off” message.

Given this knowledge, we can expand our getMIDIMessage handler example above by intelligently parsing our MIDI messages coming from our inputs and passing them along to additional handler functions.

function getMIDIMessage(message) {
    var command = message.data[0];
    var note = message.data[1];
    var velocity = (message.data.length > 2) ? message.data[2] : 0; // a velocity value might not be included with a noteOff command

    switch (command) {
        case 144: // noteOn
            if (velocity > 0) {
                noteOn(note, velocity);
            } else {
                noteOff(note);
            }
            break;
        case 128: // noteOff
            noteOff(note);
            break;
        // we could easily expand this switch statement to cover other types of commands such as controllers or sysex
    }
}
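
If you need to display these raw note values in a friendlier form, a small helper can convert a MIDI note number into a name and octave. This isn’t part of the API, just an illustrative utility:

var NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

// MIDI note 60 is middle C, conventionally written C4
function noteName(note) {
    var octave = Math.floor(note / 12) - 1;
    return NOTE_NAMES[note % 12] + octave;
}

console.log(noteName(60)); // "C4"
console.log(noteName(21)); // "A0", the lowest note on an 88-key piano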

Browser Compatibility And Polyfill

As of the writing of this article, the Web MIDI API is only available natively in Chrome, Opera, and Android WebView.


Browser support for Web MIDI API from caniuse.com


The Web MIDI API is only available natively in Chrome, Opera, and Android WebView.

For all other browsers that don’t support it natively, Chris Wilson’s WebMIDIAPIShim library provides a polyfill for the Web MIDI API (Chris is a co-author of the spec). Simply including the shim script on your page will enable everything we’ve covered so far.

<script src="WebMIDIAPI.min.js"></script>
<script>
if (navigator.requestMIDIAccess) { /* ... now returns true */ }
</script>

This shim also requires Jazz-Soft.net’s Jazz-Plugin to work, unfortunately, which means it’s an OK option for developers who want the flexibility to work in multiple browsers, but an extra barrier to mainstream adoption. Hopefully, within time, other browsers will adopt the Web MIDI API natively.

Making Our Job Easier With WebMIDI.js

We’ve only really scratched the surface of what’s possible with the WebMIDI API. Adding support for additional functionality besides basic “note on” and “note off” messages starts to get much more complex.

If you’re looking for a great JavaScript library to radically simplify your code, check out WebMidi.js by Jean-Philippe Côté on Github. This library does a great job of abstracting all the parsing of MIDIAccess and MIDIMessageEvent objects and lets you listen for specific events and add or remove listeners in a much simpler way.

WebMidi.enable(function () {

    // Viewing available inputs and outputs
    console.log(WebMidi.inputs);
    console.log(WebMidi.outputs);

    // Retrieve an input by name, id or index
    var input = WebMidi.getInputByName("My Awesome Keyboard");
    // OR...
    // input = WebMidi.getInputById("1809568182");
    // input = WebMidi.inputs[0];

    // Listen for a 'note on' message on all channels
    input.addListener('noteon', 'all',
        function (e) {
            console.log("Received 'noteon' message (" + e.note.name + e.note.octave + ").");
        }
    );

    // Listen to pitch bend message on channel 3
    input.addListener('pitchbend', 3,
        function (e) {
            console.log("Received 'pitchbend' message.", e);
        }
    );

    // Listen to control change message on all channels
    input.addListener('controlchange', "all",
        function (e) {
            console.log("Received 'controlchange' message.", e);
        }
    );

    // Remove all listeners for 'noteoff' on all channels
    input.removeListener('noteoff');

    // Remove all listeners on the input
    input.removeListener();

});

Real-World Scenario: Building A Breakout Room Controlled By A Piano Keyboard

A few months ago, my wife and I decided to build a “breakout room” experience in our house to entertain our friends and family. We wanted the game to include some kind of special effect to help elevate the experience. Unfortunately, neither of us have mad engineering skills, so building complex locks or special effects with magnets, lasers, or electrical wiring was outside the realm of our expertise. I do, however, know my way around the browser pretty well. And we have a digital piano.

Thus, an idea was born. We decided that the centerpiece of the game would be a series of passcode locks on a computer that players would have to “unlock” by playing certain note sequences on our piano, a la Willy Wonka.

This is a musical lock

Sound cool? Here’s how I did it.

Setup

We’ll begin by requesting WebMIDI access, identifying our keyboard, attaching the appropriate event listeners, and creating a few variables and functions to help us step through the various stages of the game.

// Variable which tells us what step of the game we're on.
// We'll use this later when we parse noteOn/Off messages
var currentStep = 0;

// Request MIDI access
if (navigator.requestMIDIAccess) {
    console.log('This browser supports WebMIDI!');

    navigator.requestMIDIAccess().then(onMIDISuccess, onMIDIFailure);

} else {
    console.log('WebMIDI is not supported in this browser.');
}

// Function to run when requestMIDIAccess is successful
function onMIDISuccess(midiAccess) {
    var inputs = midiAccess.inputs;
    var outputs = midiAccess.outputs;

    // Attach MIDI event "listeners" to each input
    for (var input of midiAccess.inputs.values()) {
        input.onmidimessage = getMIDIMessage;
    }
}

// Function to run when requestMIDIAccess fails
function onMIDIFailure() {
    console.log('Error: Could not access MIDI devices.');
}

// Function to parse the MIDI messages we receive
// For this app, we're only concerned with the actual note value,
// but we can parse for other information, as well
function getMIDIMessage(message) {
    var command = message.data[0];
    var note = message.data[1];
    var velocity = (message.data.length > 2) ? message.data[2] : 0; // a velocity value might not be included with a noteOff command

    switch (command) {
        case 144: // note on
            if (velocity > 0) {
                noteOn(note);
            } else {
                noteOff(note);
            }
            break;
        case 128: // note off
            noteOff(note);
            break;
        // we could easily expand this switch statement to cover other types of commands such as controllers or sysex
    }
}

// Function to handle noteOn messages (ie. key is pressed)
// Think of this like an 'onkeydown' event
function noteOn(note) {
    //...
}

// Function to handle noteOff messages (ie. key is released)
// Think of this like an 'onkeyup' event
function noteOff(note) {
    //...
}

// This function will trigger certain animations and advance gameplay
// when certain criteria are identified by the noteOn/noteOff listeners
// For instance, a lock is unlocked, the timer expires, etc.
function runSequence(sequence) {
    //...
}

Step 1: Press Any Key To Begin

To kick off the game, let’s have the players press any key to begin. This is an easy first step which will clue them into how the game works and also start a countdown timer.

function noteOn(note) {
    switch(currentStep) {
        // If the game hasn't started yet.
        // The first noteOn message we get will run the first sequence
        case 0:
            // Run our start up sequence
            runSequence('gamestart');

            // Increment the currentStep so this is only triggered once
            currentStep++;

            break;
    }
}

function runSequence(sequence) {
    switch(sequence) {
        case 'gamestart':
            // Now we'll start a countdown timer...
            startTimer();

            // code to trigger animations, give a clue for the first lock
            break;
    }
}
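
The startTimer() and stopTimer() helpers referenced above aren’t shown in the article. A minimal countdown sketch, assuming a one-hour limit and a #timer element to render into (both of which are my assumptions, not part of the original game), might look like this:

var timeRemaining = 60 * 60; // hypothetical one-hour limit, in seconds
var timerInterval;

function startTimer() {
    timerInterval = setInterval(function () {
        timeRemaining--;

        // Render the remaining time as m:ss somewhere visible to the players
        document.getElementById('timer').textContent =
            Math.floor(timeRemaining / 60) + ':' + ('0' + timeRemaining % 60).slice(-2);

        if (timeRemaining <= 0) {
            stopTimer();
            runSequence('gameover'); // hypothetical losing sequence
        }
    }, 1000);
}

function stopTimer() {
    clearInterval(timerInterval);
}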

Step 2: Play The Correct Note Sequence

For the first lock, the players must play a particular sequence of notes in the right order. I’ve actually seen this done in a real breakout room, only it was with an acoustic upright piano rigged to a lock box. Let’s re-create the effect with MIDI.

For every “note on” message received, we’ll append the numeric note value to an array and then check to see if that array matches a predefined array of note values.

We’ll assume some clues in the breakout room have told the players which notes to play. For this example, it will be the beginning of the tune to “Amazing Grace” in the key of F major. That note sequence would look like this.


A visual representation of the first nine notes of “Amazing Grace” on a piano


This is the correct sequence of notes that we’ll be listening for as the solution to the first lock.

The MIDI note values in array form would be: [60, 65, 69, 65, 69, 67, 65, 62, 60].

var correctNoteSequence = [60, 65, 69, 65, 69, 67, 65, 62, 60]; // Amazing Grace in F
var activeNoteSequence = [];

function noteOn(note) {
    switch(currentStep) {
        // ... (case 0)

        // The first lock - playing a correct sequence
        case 1:
            activeNoteSequence.push(note);

            // when the array is the same length as the correct sequence, compare the two
            if (activeNoteSequence.length == correctNoteSequence.length) {
                var match = true;
                for (var index = 0; index < activeNoteSequence.length; index++) {
                    if (activeNoteSequence[index] != correctNoteSequence[index]) {
                        match = false;
                        break;
                    }
                }

                if (match) {
                    // Run the next sequence and increment the current step
                    runSequence('lock1');
                    currentStep++;
                } else {
                    // Clear the array and start over
                    activeNoteSequence = [];
                }
            }
            break;
    }
}

function runSequence(sequence) {
    switch(sequence) {
        // ...

        case 'lock1':
            // code to trigger animations and give clue for the next lock
            break;
    }
}
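
As an aside, the element-by-element loop above could be written more compactly with Array.prototype.every; the behavior is identical:

var match = activeNoteSequence.length == correctNoteSequence.length &&
    activeNoteSequence.every(function (note, index) {
        return note == correctNoteSequence[index];
    });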

Step 3: Play The Correct Chord

The next lock requires the players to play a combination of notes at the same time. This is where our “note off” listener comes in. For every “note on” message received, we’ll add that note value to an array; for every “note off” message received, we’ll remove that note value from the array. Therefore, this array will reflect which notes are currently being pressed at any time. Then, it’s a matter of checking that array every time a note value is added to see if it matches a master array with the correct values.

For this clue, we’ll make the correct answer a C7 chord in root position starting on middle C. That looks like this.


A visual representation of a C7 chord on a piano


These are the four notes that we’ll be listening for as the solution to the second lock.

The correct MIDI note values for this chord are: [60, 64, 67, 70].

var correctChord = [60, 64, 67, 70]; // C7 chord starting on middle C
var activeChord = [];

function noteOn(note) {
    switch(currentStep) {
        // ... (case 0, 1)

        case 2:
            // add the note to the active chord array
            activeChord.push(note);

            // If the array is the same length as the correct chord, compare
            if (activeChord.length == correctChord.length) {
                var match = true;
                for (var index = 0; index < activeChord.length; index++) {
                    if (correctChord.indexOf(activeChord[index]) < 0) {
                        match = false;
                        break;
                    }
                }

                if (match) {
                    runSequence('lock2');
                    currentStep++;
                }
            }
            break;
    }
}

function noteOff(note) {
    switch(currentStep) {
        case 2:
            // Remove the note value from the active chord array
            activeChord.splice(activeChord.indexOf(note), 1);
            break;
    }
}

function runSequence(sequence) {
    switch(sequence) {
        // ...

        case 'lock2':
            // code to trigger animations, stop clock, end game
            stopTimer();

            break;
    }
}

Now all that’s left to do is to add some additional UI elements and animations and we have ourselves a working game!

Here’s a video of the entire gameplay sequence from start to finish. This is running in Google Chrome. Also shown is a virtual MIDI keyboard to help visualize which notes are currently being played. For a normal breakout room scenario, this can run in full-screen mode and with no other inputs in the room (such as a mouse or computer keyboard) to prevent users from closing the window.

WebMIDI Breakout Game Demo (watch on YouTube)

If you don’t have a physical MIDI device lying around and you still want to try it out, there are a number of virtual MIDI keyboard apps that will let you use your computer keyboard as a musical input, such as VMPK. Also, if you’d like to further dissect everything that’s going on here, check out the complete prototype on CodePen.

See the Pen WebMIDI Breakout Room Demo by Peter Anglea (@peteranglea) on CodePen.

Conclusion

MIDI.org says that “Web MIDI has the potential to be one of the most disruptive music [technologies] in a long time, maybe as disruptive as MIDI was originally back in 1983.” That’s a tall order and some seriously high praise.

I hope this article and sample app have gotten you excited about the potential this API has to spur the development of new and exciting kinds of browser-based music applications. In the coming years, hopefully, we’ll start to see more online music notation software, digital audio workstations, audio visualizers, instrument tutorials, and more.

If you want to read more about Web MIDI and its capabilities, I recommend the following:

And for further inspiration, here are some other examples of the Web MIDI API in action:


Read More:

Getting Started With The Web MIDI API


Web Development Reading List #169: TLS At Scale, Brotli Benefits, And Easy Onion Deployments

Everyone here can have a big impact on a project, on someone else. I get very excited about this when I read stories like the one about an intern at Google who did an experiment that saves tons of traffic, or when I get an email from one of my readers who now published an awesome complete beginner’s guide to front-end development.
We need to recognize that our industry depends on people who share their open-source code and we should support them and their projects that we heavily rely on.

View original post here: 

Web Development Reading List #169: TLS At Scale, Brotli Benefits, And Easy Onion Deployments


Using The Gamepad API In Web Games


The Gamepad API is a relatively new piece of technology that allows us to access the state of connected gamepads using JavaScript, which is great news for HTML5 game developers.


A lot of game genres, such as racing and platform fighting games, rely on a gamepad rather than a keyboard and mouse for the best experience. This means these games can now be played on the web with the same gamepads that are used for consoles. A demo is available, and if you don’t have a gamepad, you can still enjoy the demo using a keyboard.

The post Using The Gamepad API In Web Games appeared first on Smashing Magazine.

See original article – 

Using The Gamepad API In Web Games

101 Elements Of A Complete Product Page

Is there a concept like ‘a complete product page’?

Chances are if you have ever found yourself on a product page you have figured out the basic elements:

  • The Headline
  • The Product Image
  • The Product Specifications
  • Pricing
  • The Call to Action buttons
  • The Payment methods

Shouldn’t that be enough to make a sale? The user lands on your product page, a self-explanatory title for the product he wants finds him, he reads the specifications (color, size, material, make, model, related features), and after a glance he starts to look around for the payment methods. He likes it, presses the CTA button and bam! Sold! Works like the good old brick-and-mortar stores, or not?

The better question is: Is there something like a complete shopping experience?

The answer is ‘Yes’. There are 101 elements that come together on a product page to complete that experience.

That’s precisely what persuades users to press the CTA button.
Family Guy; Do Not Press The Button

Put yourself in the customer’s shoes:

  • You get into a retail store to buy pasta, you are greeted by the nice security guy at the door. The store manager smiles at you. You are pointed to the right shelf.
  • You scan through the variety of pasta (Spaghetti, Fusilli, Penne and Farfalle in tempting packaging). One has a free dip to go with it, you take it.
  • On the next shelf you find some dried rosemary, “Why not make it an exotic recipe?” you add it to your cart.
  • Now, you are looking for your preferred brand of ketchup, the staff member arranging goods on the shelves tells you they are out of stock.
  • A lady, another customer, exchanges greetings, casually mentions she loves the Tabasco and the Sriracha from a particular label. You take a bottle each.
  • The sign boards take you to the cash counter.
  • The lady at the cash counter wears a reassuring smile. She suggests you buy the fresh herb instead of the dried rosemary and offers to get it quick for you, you oblige.
  • A little guilt for overspending creeps in, you cancel one of the exotic sauces: “I don’t need Sriracha!” The friendly lady at the counter smiles and excludes it.

In analogy, your product page is the retail store. The friendly security guy, the store manager, the staff member, the options, the distractions, the freebies, the branding, the other customer, the sign boards, discount coupons, the reassuring lady at the cash counter who cares about your recipe enough to add fresh herbs to it are all product page elements.

Why would you press that button or make a purchase without the complete experience online?

The curious case of Benjamin (pressing the conversion) button.

Persuasion: The Reason Your User Will Press the Button

Subtle and not-so-subtle psychological factors are at play when persuading people to buy. Cialdini’s six principles of influence govern the product page elements as well. Here is a classification of the functional product page elements, listed for your convenience.

Reciprocity (It’s a Give and Take)

In simple terms: tell your consumers you care, and they’ll care to buy from you.

‘Hey, we want to save you some money, here’s the coupon for this product in your cart.’

‘If you want to talk we have a discussion board.’

Live chats and availability pop ups make your eCommerce site more interactive and human. Who doesn’t like a considerate seller?

The eager-to-help staff member at the mart and the lady at the counter know this secret. They are doing their job well by being helpful and responsive.

  1. Add-Ons
  2. Shipping Information
  3. Show Speed Of Results
  4. Industry Feedback
  5. Tools For Rating Reviews
  6. Notify When This Item Becomes Available
  7. Live Chat
  8. Flag Item
  9. Contact Us Link
  10. FAQs
  11. Feedback
  12. Benefits/ Freebies
  13. Discount
  14. Sorting Feature
  15. Store Finder
  16. Track Orders
  17. Email
  18. DataSheet, Brochure Or Manual
  19. Coupon Code Box
  20. Audio
  21. Discussion board
  22. Availability (In stock or out of stock)
  23. Return Policy
  24. Privacy Policy
  25. Search Feature

Related Post: How Badly Does Your Online Shop Need Live Chat?

Commitment (We are Creatures of Habit)

We want to belong to a common set of values, actions, or beliefs. The consumer feels a sense of ownership when he sees ‘My Account’ or ‘My Shopping History’ mentioned on the product page. A history or an account is his investment in the website and hence a commitment. This commitment has to be reinforced with warranties and insurance under applicable conditions. Remember, if there is a store you visit often, you are more likely to buy from them.

  1. Usual Payment methods
  2. Bookmarks
  3. Wishlists
  4. User Account Login
  5. Shopping (Buying) History
  6. Suggestions Based On Your Shopping (Buying) History
  7. Opt-in Form Or Subscription Form
  8. Guarantee
  9. Add this to cart
  10. Terms Of Service Agreement
  11. Insurance
  12. Credited points / Regular customer points
  13. Links to E-wallets/ Bitcoins


Social Proof (Since Everyone I Know is Doing It)

People looking in the pointed direction (Social Proof)

82% of consumers trust a company more if it is involved with social media. Belonging comes with acceptance. After commitment, the human tendency is to look for validation. Validation on social and eCommerce sites comes with increased trust. If multiple users give rave reviews about an enlisted product, people are more likely to consider buying it. Other elements here may include social share buttons, which allow people to share and get opinions on the listings they are interested in. That other lady at the sauce shelf shopping for the exotic sauces is the retail store’s social proof without even knowing it.

The page elements to influence by Social Proof are listed here:

    1. Graphs And Charts
    2. Citations and References
    3. Testimonials
    4. Industry Accreditation
    5. Experience
    6. Proof Of Working
    7. Track Record
    8. Proof Of Any Claim Made
    9. Photos And Videos Of The Product In Use
    10. Product Ratings
    11. Product Reviews (and/or Comments)
    12. Item Followers
    13. Trustmarks
    14. Statistics
    15. Seller Rating
    16. Follow seller
    17. Seller Testimonials
    18. “What’s Hot Now” or “What Is Popular Now”
    19. Survey
    20. Approval By Other Organizations
    21. From the makers/author
    22. Social Sharing buttons

Related Post: VWO eCommerce Survey 2014: What Makes Shoppers Buy

Authority (We Like being Led)

Authority doesn’t mean you command your users to buy enlisted wares. It means that you create an awe around your products or your brand. How do you do that? Has the enlisted product been endorsed by an ambassador? Was the product in the news recently? Has it won any kind of recognition or awards? If so, mention it; the product is more likely to sell because there’s a halo around it. The same applies to your eCommerce portal/brand name. If you have it, flaunt it!

  1. Formal Expertise
  2. News
  3. Tech Specs with special features
  4. Audio Visual advertisements
  5. Product Endorsement Links
  6. Media Coverage
  7. Brand certification

Authority puts a halo on the product, one must trust what wears a halo.

Likability (Like It…Will Take It!)

Liking creates a strong positive bias. This is not just acceptance; this is an out-and-out affirmation of your brand. Liking is an all-encompassing factor. It includes the UX, UI, and product presentations. It also means crazy copywriting that could lure the more adventurous buyers into visiting your website often, thus turning them into the creatures of habit who get committed to buying from you. It could be the underrated convenience that comes with the user interface or the overrated graphics, slides, or product videos.

We are not going overboard with the liking factor: Heineken is selling you beer using a ‘pleasantly smiling’ typeface, ever heard of that?

Related Post: The Why And How of Creating ‘Snackable’ Content

Include these product elements to be more likable:

  1. Product Details Or Specifications
  2. Size Information
  3. Color Options
  4. Product Tags
  5. Awards
  6. 360 Degree Views Of Products (Photos And Videos)
  7. Photos And Videos In Different Situations
  8. Step by step Explanation Of Usage Of Product – Photos And Videos
  9. Photos And Videos Of The Product When It Is Working
  10. Sorting Options For Reviews
  11. Similar Items
  12. Options For Gifting This To Someone Else
  13. Units Converter
  14. Social Sharing
  15. Differentiation
  16. Ability To List Products By Different Criteria
  17. Blogs
  18. Certifications
  19. ‘If You Bought This You May Like’ (Cross-selling)
  20. Recently viewed products
  21. Product Description
  22. Tools To Zoom In On The Product
  23. Bundling (Customized looks)
  24. Breadcrumbs
  25. Free Shipping/Benefits

Scarcity (It’s a Tease)

eCommerce Store Screenshot - Scarcity Tactic

Multiple marketing campaigns promote limited editions to up their sales. The moment you tell your buyers that there are only a few items left, there is an urge to click that button before anyone else does. ‘We are not telling you to buy this, we are just saying it’s now or never.’ Then watch them go for it. But be sure not to create a false sense of urgency; that’s going to hurt your credibility in the long run.

‘We are not telling you to buy this, we are just saying that it’s now or never.’

  1. Date Added
  2. Spares
  3. Urgency
  4. Discount Timers (see the sketch after this list)
  5. Last Date Of Availability
  6. Best Deals
  7. Pitch
  8. Must-Haves List
  9. Best Sellers List
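
Discount timers (item 4 above) are easy to wire up, but the caveat about false urgency applies: the deadline must be real. Here is a minimal countdown sketch in TypeScript; the element ID and the deadline value are hypothetical, and in practice the deadline should come from your backend, not be invented client-side.

    // Count down to a real deadline and render the remainder; a fabricated
    // deadline is exactly the "false urgency" this article warns against.
    function startDiscountTimer(deadline: Date, el: HTMLElement): void {
      const tick = () => {
        const msLeft = deadline.getTime() - Date.now();
        if (msLeft <= 0) {
          el.textContent = "Offer ended";
          clearInterval(handle);
          return;
        }
        const hours = Math.floor(msLeft / 3_600_000);
        const minutes = Math.floor((msLeft % 3_600_000) / 60_000);
        const seconds = Math.floor((msLeft % 60_000) / 1_000);
        el.textContent = `Offer ends in ${hours}h ${minutes}m ${seconds}s`;
      };
      const handle = setInterval(tick, 1_000);
      tick(); // render immediately instead of waiting one second
    }

    // Hypothetical usage, with a placeholder element ID:
    startDiscountTimer(new Date("2015-01-31T23:59:59Z"),
                       document.getElementById("deal-timer")!);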

Related Post: How to Use Urgency and Scarcity Principles to Increase eCommerce Sales

Here’s a checklist you will want to pin to your dashboard, inspired by one of the most-asked questions on Quora in the eCommerce category. We haven’t added any timers, but be quick to download it.

Get the PDF here.

When you are done adding the elements, don’t forget to test them! And comment if you think we missed any product page elements; we are happy to improve the list.
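
Testing is where tools like VWO come in; under the hood, any A/B test reduces to deterministic bucketing. Here is a minimal sketch of the idea in TypeScript, with a hypothetical visitor ID; it illustrates the general technique, not VWO’s actual implementation.

    // Deterministically assign a visitor to a variant: hash the visitor ID
    // (FNV-1a, 32-bit) and take the result modulo the number of variants,
    // so the same visitor always sees the same product page variant.
    function assignVariant(visitorId: string, variants: string[]): string {
      let hash = 0x811c9dc5; // FNV-1a offset basis
      for (let i = 0; i < visitorId.length; i++) {
        hash ^= visitorId.charCodeAt(i);
        hash = Math.imul(hash, 0x01000193); // FNV prime, 32-bit multiply
      }
      return variants[(hash >>> 0) % variants.length];
    }

    // Hypothetical usage: same ID yields the same variant on every visit.
    console.log(assignVariant("visitor-42", ["control", "with-timer"]));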

The post 101 Elements Of A Complete Product Page appeared first on VWO Blog.

Visit source: 

101 Elements Of A Complete Product Page


Laying The Groundwork For Extensibility

The Web has succeeded at interoperability and scale in a way that no other technology has before or since. Still, the Web remains far from “state of the art”, and it is being increasingly threatened by walled gardens. The Web platform often lags competitors in delivering new system and device capabilities to developers. Worse, it often hobbles new capabilities behind either high- or low-level APIs, forcing painful choices (and workarounds) on developers.

Link:

Laying The Groundwork For Extensibility