Tag Archives: navigation


How to Calculate Your Landing Page Conversion Rate (And Increase It)


A landing page represents an opportunity. Your prospect or lead will either take advantage of it or they won’t. The landing page conversion rate tells you how well you’re doing. Some websites have one or two landing pages, while others have dozens. It all depends on how many products or services you sell and what referral source sends you the most traffic. For instance, if you get tons of traffic from Facebook and Instagram Ads, you might have a landing page for each of those referral sources. You’ll want to optimize each page to reflect the visual aesthetic and copy…



Suffering From Analysis Paralysis? You Should See An Optimization Specialist


Have you ever faced down a giant table or spreadsheet of data and thought, “I have no idea what to do with this”? As marketers we’ve all probably had those deer-in-the-headlights moments once or twice, where we’ve floundered to figure out what the hell we’re looking at. Crazy Egg was built on the premise of simplicity and ease of use, for those that I fondly like to call “Google Analytics-averse” – but there’s always room for improvement when it comes to helping folks switch from analysis to action mode. Whether you’re a UX designer, small business owner, SEO expert or…



The Most Effective Ecommerce Lead Generation Tips and Strategies


I have some bad news for you. It might hurt. Everything you’ve read about lead generation strategies might not apply to your business. Why? Because ecommerce lead generation is different. If you run a business outside the ecommerce family, feel free to check out another Crazy Egg article that applies to your company. For those of you in the ecommerce market, though, we need to set a few things straight. I’m going to share with you my best tips for effective ecommerce lead generation, and you might notice that they’re not the same as the tactics you might use for,…



Building Mobile Apps Using React Native And WordPress





Muhammad Muhsin



As a web developer, you might have thought that mobile app development calls for a fresh learning curve with another programming language. Perhaps Java and Swift need to be added to your skill set to hit the ground running on both iOS and Android, and that might bog you down.

But this article has a surprise in store for you! We will look at building an e-commerce application for iOS and Android using the WooCommerce platform as our backend. This would be an ideal starting point for anyone willing to get into native cross-platform development.

A Brief History Of Cross-Platform Development

It’s 2011, and we see the beginning of hybrid mobile app development. Frameworks like Apache Cordova, PhoneGap, and Ionic Framework slowly emerge. Everything looks good, and web developers are eagerly coding away mobile apps with their existing knowledge.

However, mobile apps still looked like mobile versions of websites. No native designs like Android’s material design or iOS’s flat look. Navigation worked similar to the web and transitions were not buttery smooth. Users were not satisfied with apps built using the hybrid approach and dreamt of the native experience.

Fast forward to March 2015, and React Native appears on the scene. Developers are able to build truly native cross-platform applications using React, a favorite JavaScript library for many developers. They are now easily able to learn a small library on top of what they know with JavaScript. With this knowledge, developers are now targeting the web, iOS and Android.

Furthermore, changes done to the code during development are loaded onto the testing devices almost instantly! This used to take several minutes when we had native development through other approaches. Developers are able to enjoy the instant feedback they used to love with web development.

React developers are more than happy to be able to carry the patterns they already follow onto an entirely new platform. In fact, they are targeting two more platforms with what they already know very well.

This is all good for front-end development. But what choices do we have for back-end technology? Do we still have to learn a new language or framework?

The WordPress REST API

In late 2016, WordPress released the much awaited REST API to its core, and opened the doors for solutions with decoupled backends.

So, if you already have a WordPress and WooCommerce website and wish to retain exactly the same offerings and user profiles across your website and native app, this article is for you!

Assumptions Made In This Article

I will walk you through using your WordPress skills to build a mobile app, backed by a WooCommerce store, using React Native. The article assumes:

  • You are familiar with the different WordPress APIs, at least at a beginner level.
  • You are familiar with the basics of React.
  • You have a WordPress development server ready. I use Ubuntu with Apache.
  • You have an Android or an iOS device to test with Expo.

What We Will Build In This Tutorial

The project we are going to build through this article is a fashion store app. The app will have the following functionalities:

  • Shop page listing all products,
  • Single product page with details of the selected item,
  • ‘Add to cart’ feature,
  • ‘Show items in cart’ feature,
  • ‘Remove item from cart’ feature.

This article aims to inspire you to use this project as a starting point to build complex mobile apps using React Native.

Note: For the full application, you can visit my project on Github and clone it.

Getting Started With Our Project

We will begin building the app as per the official React Native documentation. Having installed Node on your development environment, open up the command prompt and type in the following command to install the Create React Native App globally.

npm install -g create-react-native-app

Next, we can create our project:

create-react-native-app react-native-woocommerce-store

This will create a new React Native project which we can test with Expo.

Next, we will need to install the Expo app on our mobile device which we want to test. It is available for both iOS and Android.

Once the Expo app is installed, we can run npm start on our development machine.

cd react-native-woocommerce-store

npm start


Starting a React Native project through the command line via Expo. (Large preview)

After that, you can scan the QR code through the Expo app or enter the given URL in the app’s search bar. This will run the basic ‘Hello World’ app on the mobile device. We can now edit App.js to make instant changes to the app running on the phone.

Alternatively, you can run the app on an emulator. But for brevity and accuracy, we will cover running it on an actual device.
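For reference, the stock App.js generated by Create React Native App looks roughly like the snippet below (the greeting text is ours, not from the store project); change the text, save, and the app on the device refreshes almost immediately.

import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

export default class App extends React.Component {
  render() {
    return (
      <View style={styles.container}>
        {/* change this text and watch the app reload instantly on the device */}
        <Text>Hello from the React Native WooCommerce store!</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center',
  },
});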

Next, let’s install all the required packages for the app using this command:

npm install -s axios react-native-htmlview react-navigation react-redux redux redux-thunk

Setting Up A WordPress Site

Since this article is about creating a React Native app, we will not go into details about creating a WordPress site. Please refer to this article on how to install WordPress on Ubuntu. As the WooCommerce REST API requires HTTPS, please make sure it is set up using Let’s Encrypt. Please refer to this article for a how-to guide.

We are not creating a WordPress installation on localhost since we will be running the app on a mobile device, and also since HTTPS is needed.

Once WordPress and HTTPS are successfully set up, we can install the WooCommerce plugin on the site.


Installing the WooCommerce plugin to our WordPress installation. (Large preview)

After installing and activating the plugin, continue with the WooCommerce store setup by following the wizard. After the wizard is complete, click on ‘Return to dashboard.’

You will be greeted by another prompt.


Adding example products to WooCommerce. (Large preview)

Click on ‘Let’s go’ to add the example products. This will save us the time of creating our own products to display in the app.

Constants File

To load our store’s products from the WooCommerce REST API, we need the relevant keys in place inside our app. For this purpose, we can create a constants.js file.

First create a folder called ‘src’ and create subfolders inside as follows:


Create the file ‘Constants.js’ within the constants folder. (Large preview)

Now, let’s generate the keys for WooCommerce. In the WordPress dashboard, navigate to WooCommerce → Settings → API → Keys/Apps and click on ‘Add Key.’

Next create a Read Only key with name React Native. Copy over the Consumer Key and Consumer Secret to the constants.js file as follows:

const Constants = {
   URL: {
      wc: 'https://woocommerce-store.on-its-way.com/wp-json/wc/v2/'
   },
   Keys: {
      ConsumerKey: 'CONSUMER_KEY_HERE',
      ConsumerSecret: 'CONSUMER_SECRET_HERE'
   }
}
export default Constants;

Starting With React Navigation

React Navigation is a community solution to navigating between the different screens and is a standalone library. It allows developers to set up the screens of the React Native app with just a few lines of code.

There are different navigation methods within React Navigation:

  • Stack,
  • Switch,
  • Tabs,
  • Drawer,
  • and more.

For our application, we will use a combination of StackNavigation and DrawerNavigation to navigate between the different screens. StackNavigation is similar to how browser history works on the web. We are using it since it provides an interface for the header and the header navigation icons. It has push and pop, similar to stacks in data structures. Push adds a new screen to the top of the navigation stack; pop removes a screen from the stack.

The code shows that the StackNavigation, in fact, houses the DrawerNavigation within itself. It also takes properties for the header style and header buttons. We are placing the navigation drawer button to the left and the shopping cart button to the right. The drawer button switches the drawer on and off whereas the cart button takes the user to the shopping cart screen.

const StackNavigation = StackNavigator({
  DrawerNavigation: { screen: DrawerNavigation }
}, {
    headerMode: 'float',
    navigationOptions: ({ navigation, screenProps }) => ({
      headerStyle: { backgroundColor: '#4C3E54' },
      headerTintColor: 'white',
      headerLeft: drawerButton(navigation),
      headerRight: cartButton(navigation, screenProps)
    })
  });

const drawerButton = (navigation) => (
  <Text
    style={{ padding: 15, color: 'white' }}
    onPress={() => {
      if (navigation.state.index === 0) {
        navigation.navigate('DrawerOpen')
      } else {
        navigation.navigate('DrawerClose')
      }
    }}>
    {/* hamburger icon that toggles the drawer */}
    <EvilIcons name="navicon" size={30} />
  </Text>
);

const cartButton = (navigation, screenProps) => (
  <Text style={{ padding: 15, color: 'white' }}
    onPress={() => { navigation.navigate('CartPage') }}
  >
    <EvilIcons name="cart" size={30} />
    {screenProps.cartCount}
  </Text>
);

DrawerNavigation, on the other hand, provides the side drawer which will allow us to navigate between Home, Shop, and Cart. The DrawerNavigator lists the different screens that the user can visit, namely the Home page, Products page, Product page, and Cart page. It also has a property which takes the drawer container: the sliding menu which opens up when tapping the hamburger menu.

const DrawerNavigation = DrawerNavigator({
  Home: {
    screen: HomePage,
    navigationOptions: {
      title: "RN WC Store"
    }
  },
  Products: {
    screen: Products,
    navigationOptions: {
      title: "Shop"
    }
  },
  Product: {
    screen: Product,
    navigationOptions: ({ navigation }) => ({
      title: navigation.state.params.product.name
    }),
  },
  CartPage: {
    screen: CartPage,
    navigationOptions: {
      title: "Cart"
    }
  }
}, {
    contentComponent: DrawerContainer
  });



Left: The Home page (homepage.js). Right: The open drawer (DrawerContainer.js).

Injecting The Redux Store To App.js

Since we are using Redux in this app, we have to inject the store into our app. We do this with the help of the Provider component.

const store = configureStore();

class App extends React.Component {
  render() {
    return (
      <Provider store={store}>
        <ConnectedApp />
      </Provider>
    )
  }
}

We will then have a ConnectedApp component so that we can have the cart count in the header.

class CA extends React.Component {
  render() {
    const cart = {
      cartCount: this.props.cart.length
    }
    return (
      <StackNavigation screenProps={cart} />
    );
  }
}

function mapStateToProps(state) {
  return {
    cart: state.cart
  };
}

const ConnectedApp = connect(mapStateToProps, null)(CA);

Redux Store, Actions, And Reducers

In Redux, we have three different parts:

  1. Store
    Holds the whole state of your entire application. The only way to change state is to dispatch an action to it.
  2. Actions
    A plain object that represents an intention to change the state.
  3. Reducers
    A function that accepts the current state and an action, and returns a new state.

These three components of Redux help us achieve a predictable state for the entire app. For simplicity, we will look at how the products are fetched and saved in the Redux store.

First of all, let’s look at the code for creating the store:

let middleware = [thunk];

export default function configureStore() {
    return createStore(
        RootReducer,
        applyMiddleware(...middleware)
    );
}
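The RootReducer passed to createStore is not shown in the article; a minimal sketch of how it could combine the products and cart reducers (the file names and paths here are assumptions) would be:

// RootReducer.js — a hypothetical sketch, not part of the original article
import { combineReducers } from 'redux';
import products from './reducers/products';
import cart from './reducers/cart';

// combine the individual reducers into the single root reducer
// that createStore() receives in configureStore()
const RootReducer = combineReducers({
  products,
  cart
});

export default RootReducer;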

Next, the products action is responsible for fetching the products from the remote website.

export function getProducts() {
   return (dispatch) => {
       const url = `${Constants.URL.wc}products?per_page=100&consumer_key=${Constants.Keys.ConsumerKey}&consumer_secret=${Constants.Keys.ConsumerSecret}`;

       return axios.get(url).then(response => {
           dispatch({
               type: types.GET_PRODUCTS_SUCCESS,
               products: response.data
           });
       }).catch(err => {
           console.log(err.error);
       });
   };
}

The products reducer checks the action type and returns either the fetched products payload or the existing state.

export default function (state = InitialState.products, action) {
    switch (action.type) {
        case types.GET_PRODUCTS_SUCCESS:
            return action.products;
        default:
            return state;
    }
}

Displaying The WooCommerce Shop

The products.js file is our Shop page. It basically displays the list of products from WooCommerce.

class ProductsList extends Component {

  componentDidMount() {
    this.props.ProductAction.getProducts();
  }

  _keyExtractor = (item, index) => item.id;

  render() {
    const { navigate } = this.props.navigation;
    const Items = (
      <FlatList contentContainerStyle={styles.list} numColumns={2}
        data={this.props.products}
        keyExtractor={this._keyExtractor}
        renderItem={
          ({ item }) => (
            <TouchableHighlight style={{ width: '50%' }} onPress={() => navigate("Product", { product: item })} underlayColor="white">
              <View style={styles.view}>
                <Image style={styles.image} source={{ uri: item.images[0].src }} />
                <Text style={styles.text}>{item.name}</Text>
              </View>
            </TouchableHighlight>
          )
        }
      />
    );
    return (
      <ScrollView>
        {this.props.products.length ? Items :
          <View style={{ alignItems: 'center', justifyContent: 'center' }}>
            <Image style={styles.loader} source={LoadingAnimation} />
          </View>
        }
      </ScrollView>
    );
  }
}

this.props.ProductAction.getProducts() and this.props.products are possible because of mapStateToProps and mapDispatchToProps.


Products listing screen. (Large preview)

mapStateToProps and mapDispatchToProps

State is the Redux store and Dispatch is the actions we fire. Both of these will be exposed as props in the component.

function mapStateToProps(state) {
  return {
    products: state.products
  };
}
function mapDispatchToProps(dispatch) {
  return {
    ProductAction: bindActionCreators(ProductAction, dispatch)
  };
}
export default connect(mapStateToProps, mapDispatchToProps)(ProductsList);

Styles

In React Native, styles are generally defined in the same file as the component. It’s similar to CSS, but we use camelCase properties instead of hyphenated properties.

const styles = StyleSheet.create({
  list: {
    flexDirection: 'column'
  },
  view: {
    padding: 10
  },
  loader: {
    width: 200,
    height: 200,
    alignItems: 'center',
    justifyContent: 'center',
  },
  image: {
    width: 150,
    height: 150
  },
  text: {
    textAlign: 'center',
    fontSize: 20,
    padding: 5
  }
});

Single Product Page

This page contains details of a selected product. It shows the user the name, price, and description of the product. It also has the ‘Add to cart’ function.


Single product page. (Large preview)
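The article does not list the code for this screen, but a rough sketch of what the Product component could look like follows; the styling, the use of react-native-htmlview for the description, and the wiring of the ‘Add to cart’ action are assumptions based on the rest of the tutorial, not the original implementation.

// Product.js — a rough sketch, not the article's exact implementation
import React from 'react';
import { ScrollView, Image, Text, Button } from 'react-native';
import HTMLView from 'react-native-htmlview';

export default class Product extends React.Component {
  render() {
    // the selected product was passed along via navigation params (see DrawerNavigation)
    const { product } = this.props.navigation.state.params;
    return (
      <ScrollView>
        <Image style={{ width: '100%', height: 300 }} source={{ uri: product.images[0].src }} />
        <Text style={{ fontSize: 24, textAlign: 'center' }}>{product.name}</Text>
        <Text style={{ fontSize: 20, textAlign: 'center' }}>${product.price}</Text>
        {/* WooCommerce returns the description as HTML, hence react-native-htmlview */}
        <HTMLView value={product.description} />
        {/* assumes this component is connected to CartAction, as ProductsList is to ProductAction */}
        <Button title="Add to cart" onPress={() => this.props.CartAction.addToCart(product)} />
      </ScrollView>
    );
  }
}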

Cart Page

This screen shows the list of items in the cart. The action has the functions getCart, addToCart, and removeFromCart, and the reducer handles those actions likewise. Identification of actions is done through actionTypes — constants which describe the action and are stored in a separate file.

export const GET_PRODUCTS_SUCCESS = 'GET_PRODUCTS_SUCCESS'
export const GET_PRODUCTS_FAILED = 'GET_PRODUCTS_FAILED';

export const GET_CART_SUCCESS = 'GET_CART_SUCCESS';
export const ADD_TO_CART_SUCCESS = 'ADD_TO_CART_SUCCESS';
export const REMOVE_FROM_CART_SUCCESS = 'REMOVE_FROM_CART_SUCCESS';
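The article doesn’t show the cart reducer itself; a hedged sketch of how it could respond to these action types (the shape of the cart items and the initial state file are assumptions) might look like this:

// reducers/cart.js — a rough sketch, not the article's exact implementation
import * as types from '../actions/actionTypes';
import InitialState from './initialState';

export default function (state = InitialState.cart, action) {
  switch (action.type) {
    case types.GET_CART_SUCCESS:
      // replace the cart with whatever was loaded (for example from AsyncStorage)
      return action.cart;
    case types.ADD_TO_CART_SUCCESS:
      // append the newly added item
      return [...state, action.item];
    case types.REMOVE_FROM_CART_SUCCESS:
      // drop the removed item by id
      return state.filter(item => item.id !== action.item.id);
    default:
      return state;
  }
}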

This is the code for the CartPage component:

class CartPage extends React.Component {

  componentDidMount() {
    this.props.CartAction.getCart();
  }

  _keyExtractor = (item, index) => item.id;

  removeItem(item) {
    this.props.CartAction.removeFromCart(item);
  }

  render() {
    const { cart } = this.props;
    console.log('render cart', cart)

    if (cart && cart.length > 0) {
      const Items = <FlatList contentContainerStyle={styles.list}
        data={cart}
        keyExtractor={this._keyExtractor}
        renderItem={({ item }) =>
          <View style={styles.lineItem}>
            <Image style={styles.image} source={{ uri: item.image }} />
            <Text style={styles.text}>{item.name}</Text>
            <Text style={styles.text}>{item.quantity}</Text>
            <TouchableOpacity style={{ marginLeft: 'auto' }} onPress={() => this.removeItem(item)}><Entypo name="cross" size={30} /></TouchableOpacity>
          </View>
        }
      />;
      return (
        <View style={styles.container}>
          {Items}
        </View>
      )
    } else {
      return (
        <View style={styles.container}>
          <Text>Cart is empty!</Text>
        </View>
      )
    }
  }
}

As you can see, we are using a FlatList to iterate through the cart items. It takes in an array and creates a list of items to be displayed on the screen.




Left: The cart page when it has items in it. Right: The cart page when it is empty.

Conclusion

You can configure information about the app, such as its name and icon, in the app.json file. The app can then be published after installing the exp command-line tool via npm.
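A minimal app.json might look something like the sketch below; the name, slug, and icon path are placeholders rather than values from the original project.

{
  "expo": {
    "name": "RN WC Store",
    "slug": "react-native-woocommerce-store",
    "icon": "./assets/icon.png",
    "version": "1.0.0"
  }
}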

To sum up:

  • We now have a decent e-commerce application with React Native;
  • Expo can be used to run the project on a smartphone;
  • Existing backend technologies such as WordPress can be used;
  • Redux can be used for managing the state of the entire app;
  • Web developers, especially React developers, can leverage this knowledge to build bigger apps.

For the full application, you can visit my project on Github and clone it. Feel free to fork it and improve it further. As an exercise, you can continue building more features into the project such as:

  • Checkout page,
  • Authentication,
  • Storing the cart data in AsyncStorage so that closing the app does not clear the cart.

Lazy Loading JavaScript Modules With ConditionerJS

Linking JavaScript functionality to the DOM can be a repetitive and tedious task. You add a class to an element, find all the elements on the page, and attach the matching JavaScript functionality to them. Conditioner is here to not only take this work off your hands but to supercharge it as well!

In this article, we’ll look at the JavaScript initialization logic that is often used to link UI components to a webpage. Step-by-step we’ll improve this logic, and finally, we’ll make a 1 Kilobyte jump to replacing it with Conditioner. Then we’ll explore some practical examples and code snippets and see how Conditioner can help make our websites more flexible and user-oriented.

Conditioner And Progressive Enhancement Sitting In A Tree

Before we proceed, I need to get one thing across:

Conditioner is not a framework for building web apps.

Instead, it’s aimed at websites. The distinction between websites and web apps is useful for the continuation of this story. Let me explain how I view the overall difference between the two.

Websites are mostly created from a content viewpoint; they are there to present content to the user. The HTML is written to semantically describe the content. CSS is added to nicely present the content across multiple viewports. The last and third act is to carefully layer JavaScript on top to add that extra zing to the user experience. Think of a date picker, navigation, scroll animations, or carousels (pardon my French).

Examples of content-oriented websites are for instance: Wikipedia, Smashing Magazine, your local municipality website, newspapers, and webshops. Web apps are often found in the utility area, think of web-based email clients and online maps. While also presenting content, the focus of web apps is often more on interacting with content than presenting content. There’s a huge grey area between the two, but this contrast will help us decide when Conditioner might be effective and when we should steer clear.

As stated earlier, Conditioner is all about websites, and it’s specifically built to deal with that third act:

Enhancing the presentation layer with JavaScript functionality to offer an improved user experience.

The Troublesome Third Act

The third act is about enhancing the user experience with that zingy JavaScript layer.

Judging from experience and what I’ve seen online, JavaScript functionality is often added to websites like this:

  1. A class is added to an HTML element.
  2. The querySelectorAll method is used to get all elements assigned the class.
  3. A for-loop traverses the NodeList returned in step 2.
  4. A JavaScript function is called for each item in the list.

Let’s quickly put this workflow in code by adding autocomplete functionality to an input field. We’ll create a file called autocomplete.js and add it to the page using a <script> tag.

function createAutocomplete(element) {
  // our autocomplete logic
  // ...
}

<input type="text" class="autocomplete"/>

<script src="autocomplete.js"></script>

<script>
var inputs = document.querySelectorAll('.autocomplete');

for (var i = 0; i < inputs.length; i++) {
  createAutocomplete(inputs[i]);
}
</script>

Go to demo →

That’s our starting point.

Suppose we’re now told to add another functionality to the page, say a date picker; its initialization will most likely follow the same pattern. Now we’ve got two for-loops. Add another functionality, and you’ve got three, and so on and so on. Not the best.

While this works and keeps you off the street, it creates a host of problems. We’ll have to add a loop to our initialization script for each functionality we add. For each loop we add, the initialization script gets linked ever tighter to the document structure of our website. Often the initialization script will be loaded on each page. Meaning all the querySelectorAll calls for all the different functionalities will be run on each and every page whether functionality is defined on the page or not.

For me, this setup never felt quite right. It always started out “okay,” but then it would slowly grow to a long list of repetitive for-loops. Depending on the project it might contain some conditional logic here and there to determine if something loads on a certain viewport or not.

if (window.innerWidth <= 480) {
  // small viewport for-loops here
}

Eventually, my initialization script would always grow out of control and turn into a giant pile of spaghetti code that I would not wish on anyone.

Something needed to be done.

Soul Searching

I am a huge proponent of carefully separating the three web dev layers HTML, CSS, and JavaScript. HTML shouldn’t have a rigid relationship with JavaScript, so no use of inline onclick attributes. The same goes for CSS, so no inline style attributes. Adding classes to HTML elements and then later searching for them in my beloved for-loops followed that philosophy nicely.

That stack of spaghetti loops though, I wanted to get rid of them so badly.

I remember stumbling upon an article about using data attributes instead of classes, and how those could be used to link up JavaScript functionality (I’m not sure it was this article, but it seems to be from the right timeframe). I didn’t like it, misunderstood it, and my initial thought was that this was just covering up for onclick, mixing HTML and JavaScript; no way was I going to be lured to the dark side, I didn’t want anything to do with it. Close tab.

Some weeks later I would return to this and found that linking JavaScript functionality using data attributes was still in line with having separate layers for HTML and JavaScript. As it turned out, the author of the article handed me a solution to my ever-growing initialization problem.

We’ll quickly update our script to use data attributes instead of classes.

<input type="text" data-module="autocomplete">

<script src="autocomplete.js"></script>

<script>
var inputs = document.querySelectorAll('[data-module=autocomplete]');

for (var i = 0; i < inputs.length; i++) {
  createAutocomplete(inputs[i]);
}
</script>

Go to demo →

Done!

But hang on, this is nearly the same setup; we’ve only replaced .autocomplete with [data-module=autocomplete]. How’s that any better? It’s not, you’re right. If we add an additional functionality to the page, we still have to duplicate our for-loop — blast! Don’t be sad though as this is the stepping stone to our killer for-loop.

Watch what happens when we make a couple of adjustments.

<input type="text" data-module="createAutocomplete">

<script src="autocomplete.js"></script>

<script>
var elements = document.querySelectorAll('[data-module]');

for (var i = 0; i < elements.length; i++) {
    var name = elements[i].getAttribute('data-module');
    var factory = window[name];
    factory(elements[i]);
}

</script>

Go to demo →

Now we can load any functionality with a single for-loop.

  1. Find all elements on the page with a data-module attribute;
  2. Loop over the node list;
  3. Get the name of the module from the data-module attribute;
  4. Store a reference to the JavaScript function in factory;
  5. Call the factory JavaScript function and pass the element.

Since we’ve now made the name of the module dynamic, we no longer have to add any additional initialization loops to our script. This is all we need to link any JavaScript functionality to an HTML element.

This basic setup has some other advantages as well:

  • The init script no longer needs to know what it loads; it just needs to be very good at this one little trick.
  • There’s now a convention for linking functionality to the DOM; this makes it very easy to tell which parts of the HTML will be enhanced with JavaScript.
  • The init script does not search for modules that are not there, i.e. no wasted DOM searches.
  • The init script is done. No more adjustments are needed. When we add functionality to the page, it will automatically be found and will simply work.

Wonderful!

So What About This Thing Called Conditioner?

We finally have our single loop, our one loop to rule all other loops, our king of loops, our hyper-loop. Ehm. Okay. We’ll just have to conclude that our loop is of high quality and so flexible that it can be re-used in each project (there’s not really anything project-specific about it). That does not immediately make it library-worthy; it’s still quite a basic loop. However, we’ll find that our loop requires some additional trickery to really cover all our use-cases.

Let’s explore.

With the one loop, we are now loading our functionality automatically.

  1. We assign a data-module attribute to an element.
  2. We add a <script> tag to the page referencing our functionality.
  3. The loop matches the right functionality to each element.
  4. Boom!

Let’s take a look at what we need to add to our loop to make it a bit more flexible and re-usable. Because as it is now, while amazing, we’re going to run into trouble.

  • It would be handy if we moved the global functions to isolated modules. This prevents pollution of the global scope. Makes our modules more portable to other projects. And we’ll no longer have to add our <script> tags manually. Fewer things to add to the page, fewer things to maintain.
  • When using our portable modules across multiple projects (and/or pages) we’ll probably encounter a situation where we need to pass configuration options to a module. Think API keys, labels, animation speeds. That’s a bit difficult at the moment as we can’t access the for-loop.
  • With the ever-growing diversity of devices out there we will eventually encounter a situation where we only want to load a module in a certain context. For instance, a menu that needs to be collapsed on small viewports. We don’t want to add if-statements to our loop. It’s beautiful as it is, we will not add if statements to our for-loop. Never.

That’s where Conditioner can help out. It encompasses all above functionality. On top of that, it exposes a plugin API so we can configure and expand Conditioner to exactly fit our project setup.

Let’s make that 1 Kilobyte jump and replace our initialization loop with Conditioner.

Switching To Conditioner

We can get the Conditioner library from the GitHub repository, npm or from unpkg. For the rest of the article, we’ll assume the Conditioner script file has been added to the page.

The fastest way is to add the unpkg version.

<script src="https://unpkg.com/conditioner-core/conditioner-core.js"></script>

With Conditioner added to the page lets take a moment of silence and say farewell to our killer for-loop.

Conditioner’s default behavior is exactly the same as our now departed for-loop. It’ll search for elements with the data-module attribute and link them to globally scoped JavaScript functions.

We can start this process by calling the conditioner hydrate method.

<input type="text" data-module="createAutocomplete"/>

<script src="autocomplete.js"></script>

<script>
conditioner.hydrate(document.documentElement);
</script>

Go to demo →

Note that we pass the documentElement to the hydrate method. This tells Conditioner to search the subtree of the <html> element for elements with the data-module attribute.

It basically does this:

document.documentElement.querySelectorAll('[data-module]');

Okay, great! We’re set to take it to the next level. Let’s try to replace our globally scoped JavaScript functions with modules. Modules are reusable pieces of JavaScript that expose certain functionality for use in your scripts.

Moving From Global Functions To Modules

In this article, our modules will follow the new ES Module standard, but the examples will also work with modules based on the Universal Module Definition or UMD.

Step one is turning the createAutocomplete function into a module. Let’s create a file called autocomplete.js. We’ll add a single function to this file and make it the default export.

export default function(element) {
  // autocomplete logic
  // ...
}

It’s the same as our original function, only prepended with export default.

For the other code snippets, we’ll switch from our classic function to arrow functions.

export default element => {
  // autocomplete logic
  // ...
}

We can now import our autocomplete.js module and use the exported function like this:

import('./autocomplete.js').then(module => {
  // the autocomplete function is located in module.default
});

Note that this only works in browsers that support Dynamic import(). At the time of this writing that would be Chrome 63 and Safari 11.

Okay, so we now know how to create and import modules, our next step is to tell Conditioner to do the same.

We update the data-module attribute to ./autocomplete.js so it matches our module file name and relative path.

Remember: The import() method requires a path relative to the current module. If we don’t prepend the autocomplete.js filename with ./ the browser won’t be able to find the module.

Conditioner is still busy searching for functions on the global scope. Let’s tell it to dynamically load ES Modules instead. We can do this by overriding the moduleImport action.

We also need to tell it where to find the constructor function (module.default) on the imported module. We can point Conditioner in the right direction by overriding the moduleGetConstructor action.

<input type="text" data-module="./autocomplete.js"/>

<script>
conditioner.addPlugin({
  // fetch module with dynamic import
  moduleImport: (name) => import(name),
  
  // get the module constructor
  moduleGetConstructor: (module) => module.default
});

conditioner.hydrate(document.documentElement);
</script>

Go to demo →

Done!

Conditioner will now automatically lazy load ./autocomplete.js, and once received, it will call the module.default function and pass the element as a parameter.

Defining our autocomplete as ./autocomplete.js is very verbose. It’s difficult to read, and when adding multiple modules on the page, it quickly becomes tedious to write and error prone.

This can be remedied by overriding the moduleSetName action. Conditioner views the data-module value as an alias and will only use the value returned by moduleSetName as the actual module name. Let’s automatically add the js extension and relative path prefix to make our lives a bit easier.

<input type="text" data-module="autocomplete"/>
conditioner.addPlugin({
  // converts module aliases to paths
  moduleSetName: (name) => `./${name}.js`
});

Go to demo →

Now we can set data-module to autocomplete instead of ./autocomplete.js, much better.

That’s it! We’re done! We’ve setup Conditioner to load ES Modules. Adding modules to a page is now as easy as creating a module file and adding a data-module attribute.

The plugin architecture makes Conditioner super flexible. Because of this flexibility, it can be modified for use with a wide range of module loaders and bundlers. There’s bootstrap projects available for Webpack, Browserify and RequireJS.

Please note that Conditioner does not handle module bundling. You’ll have to configure your bundler to find the right balance between serving a bundled file containing all modules or a separate file for each module. I usually cherry pick tiny modules and core UI modules (like navigation) and serve them in a bundled file while conditionally loading all scripts further down the page.

Alright, module loading — check! It’s now time to figure out how to pass configuration options to our modules. We can’t access our loop; also we don’t really want to, so we need to figure out how to pass parameters to the constructor functions of our modules.

Passing Configuration Options To Our Modules

I might have bent the truth a little bit. Conditioner has no out-of-the-box solution for passing options to modules. There I said it. To keep Conditioner as tiny as possible I decided to strip it and make it available through the plugin API. We’ll explore some other options of passing variables to modules and then use the plugin API to set up an automatic solution.

The easiest and at the same time most banal way to create options that our modules can access is to define options on the global window scope.

window.autocompleteSource = './api/query';

export default (element) => {
  console.log(window.autocompleteSource);
  // will log './api/query'
  
  // autocomplete logic
  // ...
}

Don’t do this.

It’s better to simply add additional data attributes.

<input type="text" 
       data-module="autocomplete" 
       data-source="./api/query"/>

These attributes can then be accessed inside our module by accessing the element dataset which returns a DOMStringMap of all data attributes.

export default (element) => {
  console.log(element.dataset.source);
  // will log './api/query'
  
  // autocomplete logic
  // ...
}

This could result in a bit of repetition as we’ll be accessing element.dataset in each module. If repetition is not your thing, read on, we’ll fix it right away.

We can automate this by extracting the dataset and injecting it as an options parameter when mounting the module. Let’s override the moduleSetConstructorArguments action.

conditioner.addPlugin({

  // the name of the module and the element it's being mounted to
  moduleSetConstructorArguments: (name, element) => ([
    element, 
    element.dataset
  ])
  
});

The moduleSetConstructorArguments action returns an array of parameters which will automatically be passed to the module constructor.

export default (element, options) => {
  console.log(options.source);
  // will log './api/query'
  
  // autocomplete logic
  // ...
}

We’ve only eliminated the dataset call, i.e. seven characters. Not the biggest improvement, but we’ve opened the door to take this a bit further.

Suppose we have multiple autocomplete modules on the page, and each and every single one of them requires the same API key. It would be handy if that API key was supplied automagically instead of having to add it as a data attribute on each element.

We can improve our developer lives by adding a page level configuration object.

const pageOptions = {
  // the module alias
  autocomplete: {
    key: 'abc123' // api key
  }
}

conditioner.addPlugin({

  // the name of the module and the element it's being mounted to
  moduleSetConstructorArguments: (name, element) => ([
    element, 
    // merge the default page options with the options set on the element itself
    Object.assign({}, 
      pageOptions[element.dataset.module], 
      element.dataset
    )
  ])
  
});

Go to demo →

As our pageOptions variable has been defined with const it’ll be block-scoped, which means it won’t pollute the global scope. Nice.

Using Object.assign we merge an empty object with both the pageOptions for this module and the dataset DOMStringMap found on the element. This will result in an options object containing both the source property and the key property. Should one of the autocomplete elements on the page have a data-key attribute, it will override the pageOptions default key for that element.

const ourOptions = Object.assign(
  {}, 
  { key: 'abc123' }, 
  { source: './api/query' }
);

console.log(ourOptions);
// output: { key: 'abc123', source: './api/query' }

That’s some top-notch developer convenience right there.

By having added this tiny plugin, we can automatically pass options to our modules. This makes our modules more flexible and therefore re-usable over multiple projects. We can still choose to opt-out and use dataset or globally scope our configuration variables (no, don’t), whatever fits best.

Our next challenge is the conditional loading of modules. It’s actually the reason why Conditioner is named Conditioner. Welcome to the inner circle!

Conditionally Loading Modules Based On User Context

Back in 2005, desktop computers were all the rage, everyone had one, and everyone browsed the web with it. Screen resolutions ranged from big to bigger. And while users could scale down their browser windows, we looked the other way and basked in the glory of our beautiful fixed-width sites.

I’ve rendered an artist impression of the 2005 viewport:


The 2005 viewport in its full glory, 1024 pixels wide, and 768 pixels high. Wonderful.

Today, a little over ten years later, there’s more people browsing the web on mobile than on desktop, resulting in lots of different viewports.

I’ve applied this knowledge to our artist impression below.


More viewports than you can shake a stick at.

Holy smokes! That’s a lot of viewports.

Today, someone might visit your site on a small mobile device connected to a crazy fast WiFi hotspot, while another user might access your site using a desktop computer on a slow tethered connection. Yes, I switched up the connection speeds — reality is unpredictable.

And to think we were worried about users resizing their browser window. Hah!

Note that those million viewports are not set in stone. A user might load a website in portrait orientation and then rotate the device, (or, resize the browser window), all without reloading the page. Our websites should be able to handle this and load or unload functionality accordingly.

Someone on a tiny device should not receive the same JavaScript package as someone on a desktop device. That seems hardly fair; it’ll most likely result in a sub-optimal user experience on both the tiny mobile device and the good ol’ desktop device.

With Conditioner in place, let’s configure it as a gatekeeper and have it load modules based on the current user context. The user context contains information about the environment in which the user is interacting with your functionality. Some examples of environment variables influencing context are viewport size, time of day, location, and battery level. The user can also supply you with context hints, for instance, a preference for reduced motion. How a user behaves on your platform will also tell you something about the context she might be in, is this a recurring visit, how long is the current user session?

The better we’re able to measure these environment variables the better we can enhance our interface to be appropriate for the context the user is in.

We’ll need an attribute to describe our module’s context requirements so Conditioner can determine the right moment for the module to load and to unload. We’ll call this attribute data-context. It’s pretty straightforward.

Let’s leave our lovely autocomplete module behind and shift focus to a new module. Our new section-toggle module will be used to hide the main navigation behind a toggle button on small viewports.

Since it should be possible for our section-toggle to be unloaded, the default function returns another function. Conditioner will call this function when it unloads the module.

export default (element) => {
  // sectionToggle logic
  // ...

  return () => {
    // sectionToggle unload logic
    // ...
  }
}

We don’t need the toggle behavior on big viewports as those have plenty of space for our menu (it’s a tiny menu). We only want to collapse our menu on viewports narrower than 30em (this translates to 480px).

Let’s setup the HTML.

<nav>
  <h1 data-module="sectionToggle" 
      data-context="@media (max-width:30em)">
      Navigation
  </h1>
  <ul>
    <li><a href="/home">home</a></li>
    <li><a href="/about">about</a></li>
    <li><a href="/contact">contact</a></li>
  </ul>
</nav>

Go to demo →

The data-context attribute will trigger Conditioner to automatically load a context monitor observing the media query (max-width:30em). When the user context matches this media query, it will load the module; when it does not, or no longer does, it will unload the module.

Monitoring happens based on events. This means that after the page has loaded, should the user resize the viewport or rotate the device, the user context is re-evaluated and the module is loaded or unloaded based on the new observations.

You can view monitoring as a close cousin of feature detection. Feature detection is an on/off situation: the browser either supports WebGL, or it doesn’t. Context monitoring is a continuous process; the initial state is observed at page load, but monitoring continues afterwards. While the user is navigating the page, the context is monitored, and observations can influence page state in real time.

This nonstop monitoring is important as it allows us to adapt to context changes immediately (without page reload) and optimizes our JavaScript layer to fit each new user context like a glove.

The media query monitor is the only monitor that is available by default. Adding your own custom monitors is possible using the plugin API. Let’s add a visible monitor which we’ll use to determine if an element is visible to the user (scrolled into view). To do this, we’ll use the brand new IntersectionObserver API.

conditioner.addPlugin({
  // the monitor hook expects a configuration object
  monitor: {
    // the name of our monitor, used with the '@' prefix in data-context
    name: 'visible',

    // the create method will return our monitor API
    create: (context, element) => ({

      // current match state
      matches: false,

      // called by conditioner to start listening for changes
      addListener (change) {

        new IntersectionObserver(entries => {

          // update the matches state
          this.matches = entries.pop().isIntersecting == context;

          // inform Conditioner of the state change
          change();

        }).observe(element);

      }
    })
  }
});

We now have a visible monitor at our disposal.

Let’s use this monitor to only load images when they are scrolled in to view.

Our base image HTML will be a link to the image. When JavaScript fails to load the links will still work, and the contents of the link will describe the image. This is progressive enhancement at work.

<a href="cat-nom.jpg" 
   data-module="lazyImage" 
   data-context="@visible">
   A red cat eating a yellow bird
</a>

Go to demo →

The lazyImage module will extract the link text, create an image element, and set the link text to the alt text of the image.

export default (element) => {

  // store original link text
  const text = element.textContent;

  // replace element text with image
  const image = new Image();
  image.src = element.href;
  image.setAttribute('alt', text);
  element.replaceChild(image, element.firstChild);
  
  return () => {
    // restore original element state
    element.innerHTML = text
  }
}

When the anchor is scrolled into view, the link text is replaced with an img tag.

Because we’ve returned an unload function the image will be removed when the element scrolls out of view. This is most likely not what we desire.

We can remedy this behavior by adding the was operator. It will tell Conditioner to retain the first matched state.

<a href="cat-nom.jpg" 
   data-module="lazyImage" 
   data-context="was @visible">
   A red cat eating a yellow bird
</a>

There are three other operators at our disposal.

The not operator lets us invert a monitor result. Instead of writing @visible false we can write not @visible which makes for a more natural and relaxed reading experience.
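As a quick illustration (the module name here is made up), the same inversion works for any monitor, so a module that should only load on viewports that are not small could be declared like this:

<div data-module="parallaxHero"
     data-context="not @media (max-width:30em)">
   ...
</div>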

Last but not least, we can use the or and and operators to string monitors together and form complex context requirements. Using and combined with or we can do lazy image loading on small viewports and load all images at once on big viewports.

<a href="cat-nom.jpg" 
   data-module="lazyImage" 
   data-context="was @visible and @media (max-width:30em) or @media (min-width:30em)">
   A red cat eating a yellow bird
</a>

We’ve looked at the @media monitor and have added our custom @visible monitor. There are lots of other contexts to measure and custom monitors to build:

  • Tap into the Geolocation API and monitor the location of the user @location (near: 51.4, 5.4) to maybe load different scripts when a user is near a certain location.
  • Imagine a @time monitor, which would make it possible to enhance a page dynamically based on the time of day @time (after 20:00).
  • Use the Device Light API to determine the light level @lightlevel (max-lumen: 50) at the location of the user. Which, combined with the time, could be used to perfectly tune page colors.
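To make one of these ideas concrete, here is a rough sketch of what the @time monitor mentioned in the list above might look like, built with the same plugin API as the @visible monitor; the 'after 20:00' query format and the once-a-minute re-check are assumptions, not something Conditioner ships with.

conditioner.addPlugin({
  monitor: {
    // used as @time in data-context attributes
    name: 'time',

    // context will be a string like 'after 20:00'
    create: (context, element) => ({

      matches: false,

      addListener (change) {

        // parse the hour from the context string, e.g. 'after 20:00' -> 20
        const hour = parseInt(context.replace('after', '').trim(), 10);

        const check = () => {
          this.matches = new Date().getHours() >= hour;
          change();
        };

        // evaluate now, then re-evaluate once a minute
        check();
        setInterval(check, 60 * 1000);
      }
    })
  }
});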

By moving context monitoring outside of our modules, our modules have become even more portable. If we need to add collapsible sections to one of our pages, it’s now easy to re-use our section toggle module, because it’s not aware of the context in which it’s used. It just wants to be in charge of toggling something.

And this is what Conditioner makes possible, it extracts all distractions from the module and allows you to write a module focused on a single task.

Using Conditioner In JavaScript

Conditioner exposes a total of three methods. We’ve already encountered the hydrate and addPlugin methods. Let’s now have a look at the monitor method.

The monitor method lets us manually monitor a context and receive context updates.

const monitor = conditioner.monitor('@media (min-width:30em)');
monitor.onchange = (matches) => {
  // called when a change to the context was observed
};
monitor.start();

This method makes it possible to do context monitoring from JavaScript without requiring the DOM starting point. This makes it easier to combine Conditioner with frameworks like React, Angular or Vue to help with context monitoring.

As a quick example, I’ve built a React <ContextRouter> component that uses Conditioner to monitor user context queries and switch between views. It’s heavily inspired by React Router so might look familiar.

<ContextRouter>
    <Context query="@media (min-width:30em)" 
             component={ FancyInfoGraphic } />
    <Context>
        // fallback to use on smaller viewports
        <table/>
    </Context>
</ContextRouter>

I hope someone out there is itching to convert this to Angular. As a cat and React person I just can’t get myself to do it.

Conclusion

Replacing our initialization script with the killer for loop created a single entity in charge of loading modules. From that change, automatically followed a set of requirements. We used Conditioner to fulfill these requirements and then wrote custom plugins to extend Conditioner where it didn’t fit our needs.

Not having access to our single for loop steered us towards writing more re-usable and flexible modules. By switching to dynamic imports, we could then lazy load these modules, and later load them conditionally by combining the lazy loading with context monitoring.

With conditional loading, we can quickly determine when to send which module over the connection, and by building advanced context monitors and queries, we can target more specific contexts for enhancement.

By combining all these tiny changes, we can speed up page load time and more closely match our functionality to each different context. This will result in improved user experience and as a bonus improve our developer experience as well.


Building A Static Site With Components Using Nunjucks

It’s quite popular these days, and dare I say a damn fine idea, to build sites with components. Rather than building out entire pages one by one, we build a system of components (think: a search form, an article card, a menu, a footer) and then piece together the site with those components.

JavaScript frameworks like React and Vue emphasize this idea heavily. But even if you don’t use any client-side JavaScript at all to build a site, it doesn’t mean you have to give up on the idea of building with components! By using an HTML preprocessor, we can build a static site and still get all the benefits of abstracting our site and its content into re-usable components.

Static sites are all the rage these days, and rightfully so, as they are fast, secure, and inexpensive to host. Even Smashing Magazine is a static site, believe it or not!

Let’s take a walk through a site I built recently using this technique. I used CodePen Projects to build it, which offers Nunjucks as a preprocessor, which was perfectly up for the job.

This is a microsite. It doesn’t need a full-blown CMS to handle hundreds of pages. It doesn’t need JavaScript to handle interactivity. But it does need a handful of pages that all share the same layout.


Consistent header and footer across all pages

HTML alone doesn’t have a good solution for this. What we need are imports. Languages like PHP make this simple with things like <?php include "header.php"; ?>, but static file hosts don’t run PHP (on purpose) and HTML alone is no help. Fortunately, we can preprocess includes with Nunjucks.


Importing components is possible in languages like PHP

It makes perfect sense here to create a layout, including chunks of HTML representing the header, navigation, and footer. Nunjucks templating has the concept of blocks, which allow us to slot in content into that spot when we use the layout.

<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>The Power of Serverless</title>
  <link rel="stylesheet" href="/styles/style.processed.css">
</head>
  
<body>
  
  % include "./template-parts/_header.njk" %
  
  % include "./template-parts/_nav.njk" %
  
  % block content %
  % endblock %
  
  % include "./template-parts/_footer.njk" %
  
</body>

Notice that the files being included are named like _file.njk. That’s not entirely necessary. It could be header.html or icons.svg, but they are named like this because 1) files that start with underscores are a bit of a standard way of saying they are partials, and in CodePen Projects it means they won’t be compiled on their own, and 2) by naming them .njk, we could use more Nunjucks stuff in there if we want to.

None of these bits have anything special in them at all. They are just little bits of HTML intended to be used on each of our four pages.

<footer>
  <p>Just a no-surprises footer, people. Nothing to see here.</p>
</footer>

Done this way, we can make one change and have the change reflected on all four pages.

Using The Layout For The Four Pages

Now each of our four pages can be a file. Let’s just start with index.njk though, which in CodePen Projects, will automatically be processed and create an index.html file every time you save.


Starting off with an index.njk file

Here’s what we could put in index.njk to use the layout and drop some content in that block:

{% extends "_layout.njk" %}

{% block content %}
<h1>Hello, World!</h1>
{% endblock %} 

That will buy us a fully functional home page! Nice! Each of the four pages can do the same exact thing, but putting different content in the block, and we have ourselves a little four-page site that is easy to manage.


The index.njk file gets compiled into index.html

For the record, I’m not sure I’d call these little chunks we’re re-using “components.” We’re just being efficient and breaking up a layout into chunks. I think of a component more as a re-usable chunk that accepts data and outputs a unique version of itself with that data. We’ll get to that.

Making Active Navigation

Now that we’ve repeated an identical chunk of HTML on four pages, is it possible to apply unique CSS to individual navigation items to identify the current page? We could do it with JavaScript, by looking at window.location and such, but we can do this without JavaScript. The trick is putting a class on the <body> unique to each page and using that in the CSS.

In our _layout.njk we have the body output a class name as a variable:

<body class=" body_class }">

Then before we call that layout on an individual page, we set that variable:

{% set body_class = "home" %}
{% extends "_layout.njk" %}

Let’s say our navigation was structured like

<nav class="site-nav">
  <ul>
    <li class="nav-home">
      <a href="/">
        Home
      </a>
      ...

Now we can target that link and apply special styling as needed by doing:

body.home .nav-home a,
body.services .nav-services a { /* continue matching classes for all pages... */
  /* unique active state styling */
}

Styling navigation links with an active class.

Oh, and those icons? Those are just individual .svg files I put in a folder and included like this:

{% include "../icons/cloud.svg" %}

And that allows me to style them like:

svg {
  fill: white;
}

Assuming the SVG elements inside have no fill attributes already on them.

Authoring Content In Markdown

The homepage of my microsite has a big chunk of content on it. I could certainly write and maintain that in HTML itself, but sometimes it’s nice to leave that type of thing to Markdown. Markdown feels cleaner to write and perhaps a bit easier to look at when it’s lots of copy.

This is very easy in CodePen Projects. I made a file that ends in .md, which will automatically be processed into HTML, then included that in the index.njk file.

Files in markdown get compiled into HTML on CodePen Projects.

{% block content %}
<main class="centered-text-column"> 
{% include "content/about.html" %}
</main>
{% endblock %}

Building Actual Components

Let’s consider components to be repeatable modules that are passed data to create themselves. In frameworks like Vue, you’d be working with single file components that are isolated bits of templated HTML, scoped CSS, and component-specific JavaScript. That’s super cool, but our microsite doesn’t need anything that fancy.

We need to create some “cards” based on a simple template, so we can build things like this:


Creating repeatable components with templates

Building a repeatable component like that in Nunjucks involves using what they call Macros. Macros are deliciously simple. It’s as if HTML had functions!

{% macro card(title, content) %}
<div class="card">
  <h2>{{ title }}</h2>
  <p>{{ content }}</p>
</div>
{% endmacro %}

Then you call it as needed:

{{ card('My Module', 'Lorem ipsum whatever.') }}

The whole idea here is to separate data and markup. This gives us some pretty clear and tangible benefits:

  1. If we need to make a change to the HTML, we can change it in the macro and it gets changed everywhere that uses that macro.
  2. The data isn’t tangled up in markup.
  3. The data could come from anywhere! We code the data right into calls to the macros as we’ve done above. Or we could reference some JSON data and loop over it (see the sketch after this list). I’m sure you could even imagine a setup in which that JSON data comes from a sort of headless CMS, build process, serverless function, cron job, or whatever.

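For example, here’s a minimal sketch of that third idea. The data values below are made up purely for illustration; the point is that the card macro defined above doesn’t care where the data comes from:

{% set cards = [
  { "title": "Static file hosting", "content": "Cheap, fast, and secure." },
  { "title": "Cloud functions", "content": "Run code on demand without managing a server." }
] %}

{# Loop over the data and let the macro handle the markup #}
{% for item in cards %}
  {{ card(item.title, item.content) }}
{% endfor %}
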
Now we have these repeatable cards that combine data and markup, just what we need:


HTML is controlled in the macro, while data can come from anywhere

Make As Many Components As You Like

You can take this idea and run with it. For example, imagine how Bootstrap is essentially a bunch of CSS paired with HTML patterns you’re expected to follow. You could make each of those patterns a macro and call them as needed, essentially componentizing the framework (a rough sketch of that idea follows).
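
As a rough, hypothetical sketch, a Bootstrap-style alert pattern could be wrapped up like this (the class names follow Bootstrap’s conventions; the macro itself is just an illustration, not part of the site above):

{# Hypothetical macro wrapping Bootstrap's alert pattern #}
{% macro alert(message, type="info") %}
<div class="alert alert-{{ type }}" role="alert">
  {{ message }}
</div>
{% endmacro %}

{{ alert("Your settings have been saved.", "success") }}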

You can nest components if you like, embracing a sort of atomic design philosophy. Nunjucks offers logic as well, meaning you can create conditional components and variations just by passing in different data.
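
To make the conditional idea concrete, here’s a small sketch of a card variation that only renders an icon when one is passed in (the iconCard name and the icon argument are assumptions for illustration; the cloud icon is the same one included earlier):

{% macro iconCard(title, content, icon=null) %}
<div class="card">
  {# Only render an icon if one was passed in #}
  {% if icon %}
    {% set iconPath = "../icons/" + icon + ".svg" %}
    {% include iconPath %}
  {% endif %}
  <h2>{{ title }}</h2>
  <p>{{ content }}</p>
</div>
{% endmacro %}

{{ iconCard("Cron jobs", "Run tasks on a schedule.", "cloud") }}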

For the simple site I built, I made a different macro for the ideas section because it involved slightly different data and a slightly different card design.


It’s possible to create as many components as you want

A Quick Case Against Static Sites

I might argue that most sites benefit from a component-based architecture, but only some sites are appropriate for being static. I work on plenty of sites in which having back-end languages is appropriate and useful.

One of my sites, CSS-Tricks, has things like a user login with a somewhat complex permissions system: forums, comments, eCommerce. While none of those things totally halt the idea of working statically, I’m often glad I have a database and back-end languages to work with. It helps me build what I need and keeps things under one roof.

Go Forth And Embrace The Static Life!

Remember that one of the benefits of building in the way we did in this article is that the end result is just a bunch of static files. Easy to host, fast, and secure. Yet, we didn’t have to give up working in a developer-friendly way. This site will be easy to update and add to in the future.

  • The final project is a microsite called The Power of Serverless for Front-End Developers (https://thepowerofserverless.info/).
  • Static file hosting, if you ask me, is a part of the serverless movement.
  • You can see all the code (and even fork a copy for yourself) right on CodePen. It is built, maintained, and hosted entirely on CodePen using CodePen Projects.
  • CodePen Projects handles all the Nunjucks stuff we talked about here, and also things like Sass processing and image hosting, which I took advantage of for the site. You could replicate the same with, say, a Gulp or Grunt-based build process locally. Here’s a boilerplate project like that you could spin up.

Jump to original: 

Building A Static Site With Components Using Nunjucks

Hanapin’s PPC Experts Share How to Boost Your AdWords Quality Score with Landing Pages

It’s happened to the best of us. You return from lunch, pull up your AdWords account, and hover over a keyword only to realize you have a Quality Score of just three (ooof). You scan a few more keywords, and realize some others are sitting at fours, and you’ve even got a few sad twos.

Low Quality Scores like this are a huge red flag because they mean you’re likely paying through the nose for a given keyword without the guarantee of a great ad position. Moreover, you can’t necessarily bid your way into the top spot by increasing your budget.

You ultimately want to see healthy Quality Scores of around seven or above, because a good Quality Score can boost your Ad Rank, your resulting Search Impression Share, and will help your ads get served up more often.

To ensure your ads appear in top positions whenever relevant queries come up, today we’re sharing sage advice from PPC experts Jeff Baum and Diane Anselmo from Hanapin Marketing. During Marketing Optimization Week, they spoke to three things you can do with your landing pages today to increase your Quality Score, improve your Ad Rank, and pay less to advertise overall.

But first…

What is Quality Score (and why is it such a big deal?)

Direct from Google, Quality Score is an estimate of the quality of your ads, keywords, and landing pages. Higher quality ad experiences can lead to lower prices and better ad positions.

You may remember a time when Quality Score didn’t even exist, but it was introduced as a way for you to understand if you were serving up the best experiences possible. Upping your score per keyword (especially your most important ones) is important because it determines your Ad Rank in a major way:

Max CPC Bid x Quality Score = Ad Rank

To achieve Quality Scores of seven and above, you’ll need to consider three factors. We’re talkin’: relevancy, load time, and ease of navigation, which are precisely the things Diane and Jeff say to focus on with your landing pages.

Below are the three actions Hanapin’s dynamic duo suggest you take to get the Ad Rank you deserve.

Where can you see AdWords Quality Score regularly?
If you’re not already keeping a close eye on this, simply navigate to Keywords and modify your columns to add the Quality Score column. Alternatively, you can hover over individual keywords to view scores case by case.

Tip 1) Convey the Exact Same Message From Ad to Landing Page

One of the perks of building custom landing pages fast is the ability to carry through the exact same details from your ads to your landing pages. A consistent message between the two is key because it helps visitors recognize they’ve landed in the right place, and assures them they’re on the right path to the outcome they searched for.

Here’s an ad to landing page combo Diane shared with us as an example:

Cool, 500 business cards for $8.50—got it. But when we click through to the landing page (which happens to be the brand’s homepage…)

  • The phone number from the ad doesn’t match the top of the page where we’ve landed.
  • The price in the ad headline doesn’t match the website’s headline exactly ($8.50 appears further down on the page, but could cause confusion).
  • While the ad’s CTA is to “order now”, the page we land on has tons to click on and offers up “Free Sample Kit” vs. an easy “Order Now” option to match the ad. Someone may bounce quickly because of the number of options presented.

As Jeff told us, the lesson here is that congruence builds trust. If you do everything to make sure your ads and landing pages are in sync, you’ll really benefit and likely see your Quality Score rise over time.

In a second example, we see strong message match play out really well for Vistaprint. Here’s the ad:

And all of the ad’s details make it through to the subsequent landing page:

Improve your AdWords Quality Score with landing pages like Vistaprint's here.

In this case:

  • The price matches in the prominent sub-headline
  • The phone number matches the ad
  • Stocks, shapes and finishes are mentioned prominently on the landing page after they’re seen in the ad
  • The landing page conveys the steps involved in “getting started” (the CTA that appears most prominently).

Overall, the expectations are set up in the ad and fulfilled on the landing page, which is often a sign this advertiser is paying less in the long run.

Remember: Google doesn’t tell you precisely what to fix.
As Jeff mentioned in Hanapin’s MOW talk, Google gives you a score, but doesn’t tell you exactly what to do to improve it. Luckily, we can help with reco’s around page speed, CTAs and more. Run your landing page through our Landing Page Analyzer to get solid recommendations for improving your landing pages.

Tip 2) Speed up your landing page’s load time

If you’re hit with a slow-loading page, you bounce quickly, and the same goes for prospects clicking through on your ads.

In fact, in an account Jeff was working on at Hanapin over the summer, in just one month they saw performance tank dramatically because of site speed. Noticing that most of the conversion drop off came from mobile, they quickly learned desktop visitors had a higher tolerance for slower load times, but they lost a ton of mobile prospects (from both form and phone) because of the lag.

Jeff recalls:

“We saw our ad click costs were going up because our Quality Score was dropping due to the deficiency in site speed.”

A large landing page (often the result of heavy images) tends to slow load time, and, as we’ve seen with the Unbounce Landing Page Analyzer, 82.2% of marketers have at least one image on their landing page that requires compression to speed things up.

As Jeff and Diane shared, you can check your page’s speed via Google’s free PageSpeed Insights tool and get tips to improve. Furthermore, if you want compressed versions of your images to swap in for a quick speed fix, you can also run your page through the Unbounce Landing Page Analyzer.

Pictured above: the downloadable images you can get via the Analyzer to improve your page speed and performance.

Tip 3) Ensure your landing page is easy to navigate

Using Diane’s analogy, you can think of a visit to your landing page like it’s a brick and mortar store. In other words, it’s the difference between arriving in a Nike store during Black Friday, and the same store any other time of the year. The former is a complete mess, and the latter is super organized.

Similarly, if your landing page experience is cluttered and visitors have to be patient to find what they’re looking for, you’ll see a higher bounce rate, which Google takes as a signal your landing page experience isn’t meeting needs.

Instead, you’ll want a clear information hierarchy, meaning you cover need-to-know information quickly and in a logical order, so your visitor can simply reach out and grab what they need as a next step. The difference is a visitor who can get in, find what they wanted, and check out in a matter of minutes.

This seems easy, but as Diane says,

“Sometimes when thinking about designing sites, there’s so much we want people to do that we don’t realize that people need to be given information in steps. Do this first, then do that…”

As Jeff suggested, with landing pages, less can be more. So consider where you may need multiple landing pages for communicating different aspects of your offer or business. For example, if you own a bowling alley that contains a trampoline park and laser tag arena, you may want separate ads and landing pages for communicating the party packages for each versus cramming all the details on one page that doesn’t quite meet the needs of the person looking explicitly for a laser tag birthday party.

The better you signpost a clear path to conversion on your landing pages, the better chance you’ll have at a healthy Quality Score.

The job doesn’t really end

On the whole, Diane and Jeff help their clients at Hanapin achieve terrific Ad Rank by making their ad-to-landing-page combos as relevant as possible, optimizing load time, and ensuring content and options are well organized.

Quality Score is something you’ll need to monitor over time, and there’s no exact science to it. Google checks frequently, but it may be a few weeks until you see your landing page changes influence scores.

Despite the lack of a definitive timeline, Diane encourages everyone to stay the course, and you will indeed see your Quality Score increase over time with these landing page fixes.

Original source:  

Hanapin’s PPC Experts Share How to Boost Your AdWords Quality Score with Landing Pages


Designing A Perfect Responsive Configurator

Here’s a little challenge for you. How would you design a responsive interface for a custom car configurator? The customer should be able to adjust colors, wheels, exterior details, interior details and perhaps accessories — on small and large screens. Doesn’t sound that difficult, does it? In fact, we have all seen such interfaces before. Essentially, they are just a combination of some navigation, iconography, buttons, accordions and a real-time 3D preview.

View this article:

Designing A Perfect Responsive Configurator

A Comprehensive Guide To Mobile App Design

(This is a sponsored article.) More than ever, people are engaging with their phones in crucial moments. The average US user spends 5 hours per day on mobile. The vast majority of that time is spent in apps and on websites.
The difference between a good app and a bad app is usually the quality of its user experience (UX). A good UX is what separates successful apps from unsuccessful ones.

Source – 

A Comprehensive Guide To Mobile App Design

5 Mind-blowing Use Cases for Website Popups You’ve Never Considered (Includes Augmented Reality)

Okay, so perhaps only one of these use cases will blow your mind, but it’s worth risking being labeled as click-bait to get this in your hands. Read on for the coolest things you can do with website popups. Ever. Including augmented reality. Yup.

Example #1: The Augmented Reality Customer Postcard

Alright, people. Prepare to have your minds blown. This example comes from Luis Francisco, one of our designers and chief hackers at Unbounce.

Imagine the image below is a postcard you sent to your customers.

They visit the URL printed on it, and then this happens!

Watch me blow my own mind

Try it yourself

Note: This demo uses your laptop’s camera (it won’t work without one). Follow these instructions to see how it works!

  1. Print out the postcard image (opens in new tab)
  2. Open this landing page (opens in new tab)
  3. Grant access to your camera when asked by the browser.
  4. Hold the postcard in front of your camera to see the magic! (Stand a few feet back).

Example #2: The Website Login Hijack

35% of all visitors to Unbounce.com are only there to log in to the app. You read that correctly. Thirty-five percent. You can see the details in this GA screenshot from the month of January 2018.

This is an incredibly common thing for SaaS businesses, where customers will visit the homepage to click the login link. You’ll want to create a segment in Google Analytics for this, so you can remove it from your non-customer website behavior analysis.

It’s a huge opportunity for product marketing.

If you drop a cookie on your login screen that identifies the visitor as someone trying to log in, you can then use the cookie targeting built into Unbounce to target returning account holders with a website popup containing new product release info, along with a large login link that makes their experience even easier.
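
There’s nothing exotic about the cookie itself. As a rough sketch (the existing_customer cookie name is made up, and the actual targeting is configured in Unbounce’s popup settings rather than in code), the login page could include something like:

<!-- Hypothetical: mark anyone who reaches the login screen as an existing customer -->
<script>
  document.cookie = "existing_customer=true; path=/; max-age=" + (60 * 60 * 24 * 365);
</script>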

Click here or the image below to see an example popup.

Example #3: Social Referral Welcome

Are you doing as much as you can to convert your visitors from social? Probably not, but that’s okay. For this idea, you can add an extra level of personalization by detecting the referring site (an Unbounce popup feature) and presenting a welcome experience relevant to that source.

You can take it a step further and have custom URL parameters on the social link that populate the popup with relevant content.

This is made possible by the Dynamic Text Replacement feature in Unbounce.

Check out the Tweet below. When I shared the blog post on Twitter, I added a URL parameter to the end of the URL so it reads:

https://postURL/?postTitle=“Maybe Later” - A New Interaction Model for E-commerce Entrance Popups

Try clicking the link in the Tweet. It will take you to our blog, and will show you a popup that’s only triggered when the referrer is Twitter (specifically a URL that contains t.co which is the Twitter URL shortener).

This is a really powerful way of connecting two previously disparate experiences, extending the information scent from one site to another. All without writing a single line of code.

Example #4: Preferred Social Network Share Request

If someone comes to you from Twitter, it’s a strong signal that Twitter is their social network of choice, or at least somewhere they look for and respond to socially shared content. As such, you can give them a customized tweet ready for that network once they’ve demonstrated some engagement with your blog.

Using the referrer URL targeting option in Unbounce, you can easily detect a visit from Twitter, Facebook, LinkedIn, and so on, which is what I showed you in the previous example.

You can use different triggers for this concept that are likely to be more indicative of someone who’s engaged with the post. I’d suggest the scroll trigger (either up or down), time delay, or exit.

The reason I like this approach is that most people have a preferred social network. Mine is Twitter. If you give me a specific task, such as “Would you share this on Twitter for me, please?” with a Tweet button and prepared Tweet text, I’m more likely to engage versus having 5 social share buttons at the side or bottom of the post with no instructions.

Click here or the image below to see this concept in a popup.

You’d then craft a really good Tweet, with text or links specific to this tactic so you can measure its impact.
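
Under the hood, a prepared Tweet button is just a link to Twitter’s Web Intent URL with the text and destination pre-filled. Here’s a minimal sketch (the copy, the example.com URL, and the utm_source parameter are placeholders, not the actual popup’s markup):

<!-- Prepared Tweet via Twitter's Web Intent URL; text and url are URL-encoded -->
<a href="https://twitter.com/intent/tweet?text=Great%20read%20on%20website%20popups&url=https%3A%2F%2Fexample.com%2Fblog%2Fpopups%3Futm_source%3Dtwitter"
   target="_blank" rel="noopener">
  Share this on Twitter
</a>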

BTW: the button in that popup is functional and will actually Tweet about this blog post. I’d really love a share from you, just so you know. Show the popup again so you can Tweet it.

Example #5: Joke of the Day

Let’s end the post with a fun one. I’m sure you’ve all seen those messages or jokes that appear on Slack as it’s loading (it’s a thing). It can be fun to have that unusable time filled with something delightful.

Well, this is kinda like that, except that it’s not appearing during a loading sequence, it’s just straight up thrown in the face of your visitors. Because we need to experiment, people!!!!!!!!!

For bonus points, only show this to folks who have the cookie set in example #2 – “The Website Login Hijack” cos they’re customers and might appreciate it.

To do this, I took Unbounce co-founder and Chief Product Officer, Carter Gilchrist’s pet project “Good Bad Jokes” and embedded a random joke into an iframe in a popup. Boom!

Fair warning, some of these jokes are a little NSFW.

Click here for your Joke Of The Day.


Now go back to the top and try the augmented reality example again, and then share it on your preferred social network because it’s awesome, and that’s an awesome way to do business!

Cheers my dears,
Oli

Original source – 

5 Mind-blowing Use Cases for Website Popups You’ve Never Considered (Includes Augmented Reality)