Tag Archives: javascript

Designing A Textbox, Unabridged

Shane Hudson



Ever spent an hour (or even a day) working on something just to throw the whole lot away and redo it in five minutes? That isn’t just a beginner’s code mistake; it is a real-world situation that you can easily find yourself in, especially if the problem you’re trying to solve isn’t well understood to begin with.

This is why I’m such a big proponent of upfront design, user research, and often creating multiple prototypes — also known as the old adage of “You don’t know what you don’t know.” At the same time, it is very easy to look at something someone else has made, which may have taken them quite a lot of time, and think it is extremely easy because you have the benefit of hindsight by seeing a finished product.

This idea that simple is easy was summed up nicely by Jen Simmons while speaking about CSS Grid and Piet Mondrian’s paintings:

“I feel like these paintings, you know, if you look at them with the sense of like ‘Why’s that important? I could have done that.’ It’s like, well yeah, you could paint that today because we’re so used to this kind of thinking, but would you have painted this when everything around you was Victorian — when everything around you was this other style?”

I feel this sums up the feeling I have about seeing websites and design systems that make complete sense; it’s almost as if the fact they make sense means they were easy to make. Of course, it is usually the opposite; writing the code is the simple bit, but it’s the thinking and process that goes into it that takes the most effort.

With that in mind, I’m going to explore building a text box, in an exaggeration of situations many of us often find ourselves in. Hopefully, by the end of this article, we can all feel more empathetic to how the journey from start to finish is rarely linear.


Brief

We all know that careful planning and understanding of the user need is important to a successful project of any size. We also all know that all too often we feel the need to rush to quickly design and develop new features. That can often mean our common sense and best practices are forgotten as we slog away to quickly get onto the next task on the everlasting to-do list. Rinse and repeat.

Today, our task is to build a text box. Simple enough: it needs to allow a user to type in some text. In fact, it is so simple that we leave the task to last because there is so much other important stuff to do. Then, just before we pack up to go home, we smirk and write:

<input type="text">

There we go!

Oh wait, we probably need to hook that up to send data to the backend when the form is submitted, like so:

<input type="text" name="our_textbox">

That’s better. Done. Time to go home.

How Do You Add A New Line?

The issue with using a simple text box is that it is pretty useless if you want to type a lot of text. For a name or title it works fine, but quite often a user will type more text than you expect. Trust me when I say that if you leave a textbox for long enough without strict validation, someone will paste the entirety of War and Peace. In many cases, this can be prevented by having a maximum number of characters.

In this situation though, we have found out that our laziness (or bad prioritization) in leaving it to the last minute meant we didn’t consider the real requirements. We just wanted to do another task on that everlasting to-do list and get home. This text box needs to be reusable; examples of its usage include a content entry box, a Twitter-style note box, and a user feedback box. In all of those cases, the user is likely to type a lot of text, and a basic text box would just scroll sideways. Sometimes that may be okay, but generally, that’s an awful experience.

Thankfully for us, that simple mistake doesn’t take long to fix:

<textarea name="our_textbox"></textarea>

Now, let’s take a moment to consider that line. A <textarea>: as simple as it can get without removing the name. Isn’t it interesting (or is it just my pedantic mind?) that we need to use a completely different element to add a new line? It isn’t a type of input, or an attribute used to make an input multi-line. Also, the <textarea> element is not self-closing, but an input is. Strange.

This “moment to consider” sent me time traveling back to October 1993, trawling through the depths of the www-talk mailing list. There was clearly much discussion about the future of the web and what “HTML+” should contain. This was 1993 and they were discussing ideas such as <input type="range"> which wasn’t available until HTML5, and Jim Davis said:

“Well, it’s far-fetched I suppose, but you might use HTML forms as part of a game playing interface.”

This really does show that the web wasn’t just intended to be about documents, as is widely believed. Marc Andreessen suggested having <input type="textarea"> instead of allowing new lines in the single-line text type, [saying](http://1997.webhistory.org/www.lists/www-talk.1993q4/0200.html):

“Makes the browser code cleaner — they have to be handled differently internally.”

That’s a fair reason to have <textarea> separate to text, but that’s still not what we ended up with. So why is <textarea> its own element?

I didn’t find any decision in the mailing list archives, but by the following month, the HTML+ Discussion Document had the <textarea> element and a note saying:

“In the initial design for forms, multi-line text fields were supported by the INPUT element with TYPE=TEXT. Unfortunately, this causes problems for fields with long text values as SGML limits the length of attribute literals. The HTML+ DTD allows for up to 1024 characters (the SGML default is only 240 characters!)”

Ah, so that’s why the text goes within the element and cannot be self-closing; they were not able to use an attribute for long text. In 1994, the <textarea> element was included, along with many others from HTML+ such as <option>, in the HTML 2 spec.

Okay, that’s enough. I could easily explore the archives further but back to the task.

Styling A <textarea>

So we’ve got a default <textarea>. If you rarely use them or haven’t seen the browser defaults in a long time, then you may be surprised. A <textarea> (made almost purely for multi-line text) looks very similar to a normal text input, except most browser defaults style the border darker, the box slightly larger, and there are lines in the bottom right. Those lines are the resize handle; they aren’t actually part of the spec so browsers all handle (pun absolutely intended) it in their own way. That generally means that the resize handle cannot be restyled, though you can disable resizing by setting resize: none on the <textarea>. It is possible to create a custom handle or use browser-specific pseudo-elements such as ::-webkit-resizer.


A default <textarea> with no styling: a small box with a grey border and three lines as a resize handle. (Large preview)

It’s important to understand the defaults, especially because of the resizing ability. It’s a very unique behavior; the user is able to drag to change the size of the element by default. If you don’t override the minimum and maximum sizes then the size could be as small as 9px × 9px (when I checked Chrome) or as large as they have patience to drag it. That’s something that could cause mayhem with the rest of the site’s layout if it’s not considered. Imagine a grid where <textarea> is in one column and a blue box is in another; the size of the blue box is purely decided by the size of the <textarea>.
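
If that is a concern, the fix is to constrain the resizing with regular CSS min and max properties. A minimal sketch (the values here are just illustrative):

textarea {
  /* keep the resize handle usable but bounded */
  min-height: 4em;
  max-height: 20em;
  min-width: 15em;
  /* never let it break out of its column */
  max-width: 100%;
}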

Other than that, we can approach styling a <textarea> much the same as any other input. Want to change the grey around the edge into thick green dashes? Sure, here you go: border: 5px dashed green;. Want to restyle the focus, which in a lot of browsers is a slightly blurred box shadow? Change the outline — responsibly though, you know, that’s important for accessibility. You can even add a background image to your <textarea> if that interests you (I can think of a few ideas that would have been popular when skeuomorphic design was more celebrated).

Scope Creep

We’ve all experienced scope creep in our work, whether it is a client that doesn’t think the final version matches their idea or you just try to squeeze in a tiny tweak and end up taking forever to finish it. So I (enjoying creating the persona of an exaggerated project manager telling us what we need to build) have decided that our <textarea> just is not good enough. Yes, it is now multi-line, and that’s great, and yes it even ‘pops’ a bit more with its new styling. Yet, it just doesn’t fit the very vague user need that I’ve pretty much just thought of now, after we thought we were almost done.

What happens if the user puts in thousands of words? Or drags the resize handle so far it breaks the layout? It needs to be reusable, as we have already mentioned, but in some of the situations (such as a ‘Twitteresque’ note-taking box), we will need a limit. So the next task is to add a character limit. The user needs to be able to see how many characters they have left.

In the same way we started with <input> instead of <textarea>, it is very easy to think that adding the maxlength attribute would solve our issue. That is one way to limit the number of characters the user types; it uses the browser’s built-in validation, but it is not able to display how many characters are left.

We started with the HTML, then added the CSS, and now it is time for some JavaScript. As we’ve seen, charging along like a bull in a china shop without stopping to consider the right approaches can really slow us down in the long run, especially in situations where a large refactor is required to change course. So let’s think about this counter; it needs to update as the user types, so we need to trigger an event when the user types. It then needs to check if the amount of text is already at the maximum length.

So which event handler should we choose?

  • change
    Intuitively, it may make sense to choose the change event. It works on <textarea> and does what it says on the tin. Except, it only triggers when the element loses focus, so it wouldn’t update while typing.
  • keypress
    The keypress event is triggered when typing any character, which is a good start. But it does not trigger when characters are deleted, so the counter wouldn’t update after pressing backspace. It also doesn’t trigger after a copy/paste.
  • keyup
    This one gets quite close; it is triggered whenever a key has been pressed (including the backspace button). So it does trigger when deleting characters, but still not after a copy/paste.
  • input
    This is the one we want. It triggers whenever a character is added, deleted, or pasted.

This is another good example of how using our intuition just isn’t enough sometimes. There are so many quirks (especially in JavaScript!) that are all important to consider before getting started. So the code needs to update a counter (which we’ve done with a span that has a class called counter) from an input event handler on the <textarea>. The maximum number of characters is set in a variable called maxLength and also added to the HTML, so if the value changes it only needs changing in one place.

var textEl = document.querySelector('textarea')
var counterEl = document.querySelector('.counter')
var maxLength = 200

textEl.setAttribute('maxlength', maxLength)
textEl.addEventListener('input', () => {
  var count = textEl.value.length
  counterEl.innerHTML = `${count}/${maxLength}`
})
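
For completeness, here is a minimal sketch of the markup the script above assumes; the counter class name is the only thing the JavaScript relies on:

<textarea name="our_textbox"></textarea>
<span class="counter"></span>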

Browser Compatibility And Progressive Enhancement

Progressive enhancement is a mindset in which we understand that we have no control over exactly what the user sees on their screen, and instead, we try to guide the browser. Responsive Web Design is a good example, where we build a website that adjusts to suit the content on the particular size viewport without manually setting what each size would look like. It means that on the one hand, we strongly care that a website works across all browsers and devices, but on the other hand, we don’t care that they look exactly the same.

Currently, we are missing a trick. We haven’t set a sensible default for the counter. The default is currently “0/200” if 200 were the maximum length; this kind of makes sense but has two downsides. The first is that it doesn’t really make sense at first glance; you need to start typing before it is obvious the 0 updates as you type. The other downside is that the counter only updates via JavaScript, meaning that if the JavaScript event doesn’t trigger properly (maybe the script did not download correctly, or it uses JavaScript that an old browser doesn’t support, such as the arrow function in the code above) then it won’t do anything. A better way would be to think carefully beforehand: how would we go about making it useful both when it is working and when it isn’t?

In this case, we could make the default text be “200 character limit.” This would mean that without any JavaScript at all, the user would always see the character limit, but it just wouldn’t give feedback about how close they are to it. However, when the JavaScript is working, it would update as they type and could say “200 characters remaining” instead. It is a very subtle change, but it means that although two users could get different experiences, neither is getting an experience that feels broken.

Another default that we could set is the maxlength on the element itself rather than afterwards with JavaScript. Without doing this, the baseline version (the one without JS) would allow typing past the limit.
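
Putting those two defaults together, a hedged sketch of the baseline markup might look something like this (the wording of the default text is just an example):

<textarea name="our_textbox" maxlength="200"></textarea>
<span class="counter">200 character limit</span>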

User Testing

It’s all very well testing on various browsers and thinking about the various permutations of how devices could serve the website in a different way, but are users able to use it?

Generally speaking, no. I’m consistently shocked by user testing; people never use a site how you expect them to. This means that user testing is crucial.

It’s quite hard to simulate a user test session in an article, so for the purposes of this article, I’m going to just focus on one point that I’ve seen users struggle with on various projects.

The user is happily writing away, gets to 0 characters remaining, and then gets stuck. They forget what they were writing, or they don’t notice that the box has stopped accepting input.

This happens because there is nothing telling the user that something has changed; if they are typing away without paying much attention, then they can hit the maximum length without noticing. This is a frustrating experience.

One way to solve this issue is to allow overtyping, so the maximum length still counts for it to be valid when submitted but it allows the user to type as much as they want and then edit it before submission. This is a good solution as it gives the control back to the user.

Okay, so how do we implement overtyping? Instead of jumping into the code, let’s step through in theory. maxlength doesn’t allow overtyping, it just stops allowing input once it hits the limit. So we need to remove maxlength and write a JS equivalent. We can use the input event handler as we did before, as we know that works on paste, etc. So in that event, the handler would check if the user has typed more than the limit, and if so, the counter text could change to say “10 characters too many.” The baseline version (without the JS) would no longer have a limit at all, so a useful middle ground could be to add the maxlength to the element in the HTML and remove the attribute using JavaScript.

That way, the user would see that they are over the limit without being cut off while typing. There would still need to be validation to make sure it isn’t submitted, but that is worth the extra small bit of work to make the user experience far better.
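
As a sketch of that theory in code (assuming the same textarea and counter elements as before, and that maxlength is set in the HTML as our baseline):

var textEl = document.querySelector('textarea')
var counterEl = document.querySelector('.counter')
var maxLength = parseInt(textEl.getAttribute('maxlength'), 10)

// JS is running, so remove the hard limit and let the user overtype
textEl.removeAttribute('maxlength')

textEl.addEventListener('input', () => {
  var remaining = maxLength - textEl.value.length
  counterEl.innerHTML = remaining >= 0
    ? `${remaining} characters remaining`
    : `${-remaining} characters too many`
})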


Allowing the user to overtype: “17 characters too many” shown in red text next to a <textarea>. (Large preview)

Designing The Overtype

This gets us to quite a solid position: the user is now able to use any device and get a decent experience. If they type too much it is not going to cut them off; instead, it will just allow it and encourage them to edit it down.

There’s a variety of ways this could be designed differently, so let’s look at how Twitter handles it:


Twitter’s <textarea>, with overtyped text highlighted by a red background. (Large preview)

Twitter has been iterating on its main tweet <textarea> since the company started. The current version uses a lot of techniques that we could consider using.

As you type on Twitter, there is a circle that completes once you get to the character limit of 280. Interestingly, it doesn’t say how many characters are available until you are 20 characters away from the limit. At that point, the incomplete circle turns orange. Once you have 0 characters remaining, it turns red. After the 0 characters, the countdown goes negative; it doesn’t appear to have a limit on how far you can overtype (I tried as far as 4,000 characters remaining) but the tweet button is disabled while overtyping.

So this works the same way as ours does, with the main difference being that the characters are represented by a circle, which updates and shows the number of characters remaining once you pass 260 characters. We could implement this by removing the text and replacing it with an SVG circle.
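
As a rough sketch of that idea (reusing textEl from earlier; the 280 limit and the circle size are assumptions borrowed from Twitter’s design), an SVG stroke can be drawn progressively by shrinking its dash offset:

<svg width="30" height="30">
  <circle class="progress" cx="15" cy="15" r="12"
          fill="none" stroke="currentColor" stroke-width="3"/>
</svg>

var circleEl = document.querySelector('.progress')
var circumference = 2 * Math.PI * 12

// the dash pattern is one full circle long, and a full offset hides it
circleEl.style.strokeDasharray = circumference
circleEl.style.strokeDashoffset = circumference

textEl.addEventListener('input', () => {
  var progress = Math.min(textEl.value.length / 280, 1)
  // reducing the offset reveals more of the circle as the count grows
  circleEl.style.strokeDashoffset = circumference * (1 - progress)
})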

The other thing that Twitter does is add a red background behind the overtyped text. This makes it completely obvious that the user is going to need to edit or remove some of the text to publish the tweet. It is a really nice part of the design. So how would we implement that? We would start again from the beginning.

You remember the part where we realized that a basic input text box would not give us multiline? And that a maxlength attribute would not give us the ability to overtype? This is one of those cases. As far as I know, there is nothing in CSS that gives us the ability to style parts of the text inside a <textarea>. This is the point where some people would suggest web components, as what we would need is a pretend <textarea>. We would need some kind of element — probably a div — with contenteditable on it and in JS we would need to wrap the overtyped text in a span that is styled with CSS.
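
A heavily simplified sketch of that span-wrapping idea is below. Be warned that rewriting innerHTML like this resets the caret position, so a real implementation would need careful selection handling on top, and the class names are made up for the example:

var editorEl = document.querySelector('.fake-textarea') // a <div contenteditable>
var limit = 280

editorEl.addEventListener('input', () => {
  var text = editorEl.textContent
  if (text.length <= limit) return
  // wrap everything past the limit in a span we can style red
  editorEl.innerHTML = text.slice(0, limit) +
    '<span class="overtyped">' + text.slice(limit) + '</span>'
})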

What would the baseline non-JS version look like then? Well, it wouldn’t work at all, because while contenteditable will work without JS, we would have no way to actually do anything with it. So we would need to have a <textarea> by default and remove that if JS is available. We would also need to do a lot of accessibility testing because, while we can trust a <textarea> to be accessible, relying on browser features is a much safer bet than building your own components. How does Twitter handle it? You may have seen it; if you are on a train and your JavaScript doesn’t load while going into a tunnel, then you get chucked into a decade-old legacy version of Twitter where there is no character limit at all.

What happens then if you tweet over the character limit? Twitter reloads the page with an error message saying “Your Tweet was over the character limit. You’ll have to be more clever.” No, Twitter. You need to be more clever.

Retro

The only way to conclude this dramatization is a retrospective. What went well? What did we learn? What would we do differently next time or what would we change completely?

We started very simple with a basic textbox; in some ways, this is good because it can be all too easy to overcomplicate things from the beginning, and an MVP approach is good. However, as time went on, we realized how important it is to apply some critical thinking and to consider what we are doing. We should have known a basic textbox wouldn’t be enough and that a way of setting a maximum length would be useful. It is even possible that, had we conducted or sat in on user research sessions in the past, we could have anticipated the need to allow overtyping. As for the browser compatibility and user experiences across devices, considering progressive enhancement from the beginning would have caught most of those potential issues.

So one change we could make is to be much more proactive about the thinking process instead of jumping straight into the task, thinking that the code is easy when actually the code is the least important part.

In a similar vein, we had the “scope creep” of maxlength, and while we could possibly have anticipated that, we would rather not have any scope creep at all. So having everybody involved from the beginning would be very useful, as a diverse multidisciplinary approach to even small tasks like this can seriously reduce the time it takes to figure out and fix all the unexpected tweaks.

Back To The Real World

Okay, so I can get quite deep into this made-up project, but I think it demonstrates well how complicated the most seemingly simple tasks can be. Being user-focussed, having a progressive enhancement mindset, and thinking things through from the beginning can have a real impact on both the speed and quality of delivery. And I didn’t even mention testing!

I went into some detail about the history of the <textarea> and which event listeners to use. Some of this can seem like overkill, but I find it fascinating to gain a real understanding of the subtleties of the web, and it can often help demystify issues we will face in the future.



See the original post: Designing A Textbox, Unabridged

Logging Activity With The Web Beacon API

Drew McLellan



The Beacon API is a JavaScript-based Web API for sending small amounts of data from the browser to the web server without waiting for a response. In this article, we’ll look at what that can be useful for, what makes it different from familiar techniques like XMLHttpRequest (‘Ajax’), and how you can get started using it.

If you know why you want to use Beacon already, feel free to jump directly to the Getting Started section.

What Is The Beacon API For?

The Beacon API is used for sending small amounts of data to a server without waiting for a response. That last part is critical and is the key to why Beacon is so useful — our code never even gets to see a response, even if the server sends one. Beacons are specifically for sending data and then forgetting about it. We don’t expect a response and we don’t get a response.

Think of it like a postcard sent home when on vacation. You put a small amount of data on it (a bit of “Wish you were here” and “The weather’s been lovely”), put it in the mailbox, and you don’t expect a response. No one sends a return postcard saying “Yes, I do wish I was there actually, thank you very much!”

For modern websites and applications, there’s a number of use cases that fall very neatly into this pattern of send-and-forget.

Tracking Stats And Analytics Data

The first use case that comes to mind for most people is analytics. Big solutions like Google Analytics might give a good overview of things like page visits, but what if we wanted something more customized? We could write some JavaScript to track what’s happening on a page (maybe how a user interacts with a component, how far they’ve scrolled, or which articles have been displayed before they follow a CTA), but we then need to send that data to the server when the user leaves the page. Beacon is perfect for this, as we’re just logging the data and don’t need a response.

There’s no reason we couldn’t also cover the sort of mundane tasks often handled by Google Analytics, reporting on the users themselves and the capabilities of their device and browser. If the user has a logged-in session, you could even tie those stats back to a known individual. Whatever data you gather, you can send it back to the server with Beacon.

Debugging And Logging

Another useful application for this behavior is logging information from your JavaScript code. Imagine you have a complex interactive component on your page that works perfectly for all your tests, but occasionally fails in production. You know it’s failing, but you can’t see the error in order to begin debugging it. If you can detect a failure in the code itself, you could then gather up diagnostics and use Beacon to send it all back for logging.
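
As a sketch of how that might look (the endpoint name is hypothetical), a global error handler can gather basic diagnostics and beacon them home:

window.addEventListener('error', (event) => {
  let data = new FormData();
  data.append('message', event.message);
  data.append('source', event.filename + ':' + event.lineno);
  data.append('url', document.URL);
  navigator.sendBeacon('/api/log-error', data);
});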

In fact, any logging task can usefully be performed using Beacon, be that creating save-points in a game, collecting information on feature use, or recording results from a multivariate test. If it’s something that happens in the browser that you want the server to know about, then Beacon is likely a contender.

Can’t We Already Do This?

I know what you’re thinking. None of this is new, is it? We’ve been able to communicate from the browser to the server using XMLHttpRequest for more than a decade. More recently, we also have the Fetch API, which does much the same thing with a more modern promise-based interface. Given that, why do we need the Beacon API at all?

The key here is that because we don’t get a response, the browser can queue up the request and send it without blocking execution of any other code. As far as the browser is concerned, it doesn’t matter if our code is still running or not, or where the script execution has got to. As there’s nothing to return, the browser can just background the sending of the HTTP request until it’s convenient to send it.

That might mean waiting until CPU load is lower, or until the network is free, or even just sending it right away if it can. The important thing is that the browser queues the beacon and returns control immediately. It does not hold things up while the beacon sends.

To understand why this is a big deal, we need to look at how and when these sorts of requests are issued from our code. Take our example of an analytics logging script. Our code may be timing how long the users spend on a page, so it becomes critical that the data is sent back to the server at the last possible moment. When the user goes to leave a page, we want to stop timing and send the data back home.

Typically, you’d use either the unload or beforeunload event to execute the logging. These are fired when the user does something like following a link on the page to navigate away. The trouble here is that code running on one of the unload events can block execution and delay the unloading of the page. If unloading of the page is delayed, then the loading of the next page is also delayed, and so the experience feels really sluggish.

Keep in mind how slow HTTP requests can be. If you’re thinking about performance, typically one of the main factors you try to cut down on is extra HTTP requests because going out to the network and getting a response can be super slow. The very last thing you want to do is put that slowness between the activation of a link and the start of the request for the next page.

Beacon gets around this by queuing the request without blocking, returning control immediately back to your script. The browser then takes care of sending that request in the background without blocking. This makes everything much faster, which makes users happier and lets us all keep our jobs.

Getting Started

We understand what Beacon is and why we might use it, so let’s get started with some code. The basics couldn’t be simpler:

let result = navigator.sendBeacon(url, data);

The result is a boolean: true if the browser accepted and queued the request, and false if there was a problem in doing so.

Using navigator.sendBeacon()

navigator.sendBeacon takes two parameters. The first is the URL to make the request to. The request is performed as an HTTP POST, sending any data provided in the second parameter.

The data parameter can be in one of several formats, all of which are taken directly from the Fetch API. This can be a Blob, a BufferSource, FormData or URLSearchParams — basically any of the body types used when making a request with Fetch.

I like using FormData for basic key-value data as it’s uncomplicated and easy to read back.

// URL to send the data to
let url = '/api/my-endpoint';
    
// Create a new FormData and add a key/value pair
let data = new FormData();
data.append('hello', 'world');
    
let result = navigator.sendBeacon(url, data);
    
if (result) {
  console.log('Successfully queued!');
} else {
  console.log('Failure.');
}

Browser Support

Support in browsers for Beacon is very good, with the only notable exceptions being Internet Explorer (works in Edge) and Opera Mini. For most uses, that should be fine, but it’s worth testing for support before trying to use navigator.sendBeacon.

That’s easy to do:

if (navigator.sendBeacon) {
  // Beacon code
} else {
  // No Beacon. Maybe fall back to XHR?
}

If Beacon isn’t available and your request is important, you could fall back to a blocking method such as XHR. Depending on your audience and purpose, you might equally choose to not bother.
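
A small wrapper makes that decision in one place. Here is a hedged sketch that falls back to a synchronous XHR (synchronous because an asynchronous request may be cancelled when the page unloads):

function sendLog(url, data) {
  if (navigator.sendBeacon) {
    return navigator.sendBeacon(url, data);
  }
  // Blocking fallback; the third argument false makes the request synchronous
  let xhr = new XMLHttpRequest();
  xhr.open('POST', url, false);
  xhr.send(data);
  return true;
}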

An Example: Logging Time On A Page

To see this in practice, let’s create a basic system to time how long a user stays on a page. When the page loads we’ll note the time, and when the user leaves the page we’ll send the start time and current time to the server.

As we only care about time spent (not the actual time of day) we can use performance.now() to get a basic timestamp as the page loads:

let startTime = performance.now();

If we wrap up our logging into a function, we can call it when the page unloads.

let logVisit = function() {
  // Test that we have support
  if (!navigator.sendBeacon) return true;

  // URL to send the data to, e.g.
  let url = '/api/log-visit';

  // Data to send
  let data = new FormData();
  data.append('start', startTime);
  data.append('end', performance.now());
  data.append('url', document.URL);

  // Let's go!
  navigator.sendBeacon(url, data);
};

Finally, we need to call this function when the user leaves the page. My first instinct was to use the unload event, but Safari on a Mac seems to block the request with a security warning, so beforeunload works just fine for us here.

window.addEventListener('beforeunload', logVisit);

When the page unloads (or, just before it does) our logVisit() function will be called and, provided the browser supports the Beacon API, our beacon will be sent.

(Note that if there is no Beacon support, we return true and pretend it all worked great. Returning false would cancel the event and stop the page unloading. That would be unfortunate.)

Considerations When Tracking

As so many of the potential uses for Beacon revolve around tracking of activity, I think it would be remiss not to mention the social and legal responsibilities we have as developers when logging and tracking activity that could be tied back to users.

GDPR

We may think of the recent European GDPR laws as they relate to email, but of course, the legislation relates to storing any type of personal data. If you know who your users are and can identify their sessions, then you should check what activity you are logging and how it relates to your stated policies.

Often we don’t need to track as much data as our instincts as developers tell us we should. It can be better to deliberately not store information that would identify a user, and then you reduce your likelihood of getting things wrong.

DNT: Do Not Track

In addition to legal requirements, most browsers have a setting to enable the user to express a desire not to be tracked. Do Not Track sends an HTTP header with the request that looks like this:

DNT: 1

If you’re logging data that can track a specific user and the user sends a positive DNT header, then it would be best to follow the user’s wishes and anonymize that data or not track it at all.

In PHP, for example, you can very easily test for this header like so:

if (!empty($_SERVER['HTTP_DNT'])) {
  // User does not wish to be tracked ...
}
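
We can make a similar check on the client side before a beacon is ever queued (reusing the url and data from the earlier examples). Support for reading the preference varies between browsers, so treat this as a sketch:

// '1' means the user has asked not to be tracked
let dnt = navigator.doNotTrack || window.doNotTrack;

if (dnt !== '1') {
  navigator.sendBeacon(url, data);
}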

In Conclusion

The Beacon API is a really useful way to send data from a page back to the server, particularly in a logging context. Browser support is very broad, and it enables you to seamlessly log data without negatively impacting the user’s browsing experience or the performance of your site. The non-blocking nature of the requests means that performance is much better than with alternatives such as XHR and Fetch.

If you’d like to read more about the Beacon API, the following sites are worth a look.



More: Logging Activity With The Web Beacon API

Intro to Vue.js

Full-day workshop — June 25th
Vue.js brings together the best features of the JavaScript framework landscape elegantly. If you’re interested in writing maintainable, clean code in an exciting and expressive manner, you should consider joining this class.
We’ll run through all the ways that Vue quickly solves common front-end developer problems, as well as flexible ways to access the underlying API so that you can solve any use case.
Vue is uniquely friendly to all levels of web developers.

View original: Intro to Vue.js

A Conference Without Slides: Meet SmashingConf Toronto 2018 (June 26-27)

Vitaly Friedman



What would be the best way to learn and improve your skills? By looking over the shoulders of designers and developers! At SmashingConf Toronto, taking place on June 26–27, we will do exactly that. All talks will be live coding and design sessions on stage, showing how speakers such as Dan Mall, Lea Verou, Rachel Andrew, and Sara Soueidan design and build stuff — including pattern library setup, design workflows and shortcuts, debugging, naming conventions, and everything in between. To the tickets →

SmashingConf Toronto 2018: What if there was a web conference without… slides? Meet SmashingConf Toronto 2018 with live sessions exploring how experts think, design, build and debug.

The night before the conference we’ll be hosting a FailNight, a warm-up party with a twist — every single session will be highlighting how we all failed on a small or big scale. Because, well, it’s mistakes that help us get better and smarter, right?

Speakers

One track, two conference days (June 26–27), 16 speakers, and just 500 available seats. We’ll cover everything from efficient design workflow and getting started with Vue.js to improving eCommerce UX and production-ready CSS Grid Layouts. Also on our list: performance audits, data visualization, multi-cultural designs, and other fields that may come up in your day-to-day work.

Here’s what you should be expecting:

Dan Mall and Lea Verou are two of the speakers at SmashingConf Toronto.
That’s quite a line-up of speakers, with topics ranging from live CSS/JavaScript coding to live lettering.

Conference Tickets

C$699: Get Your Ticket

Two days of great speakers and networking
Check all speakers →

Conf + Workshop Tickets

Save C$100: Conf + Workshop

Three days full of learning and networking
Check all workshops →

Workshops At SmashingConf Toronto

On the two days before and after the conference, you have the chance to dive deep into a topic of your choice. Tickets for the full-day workshops cost C$599. If you combine it with a conference ticket, you’ll save C$100 on the regular workshop price. Seats are limited.

Workshops on Monday, June 25th

Sara Soueidan on The CSS & SVG Power Combo
The workshop with the strongest punch of creativity. The CSS & SVG Power Combo is where you will learn about the latest, cutting-edge CSS and SVG techniques to create crisp and beautiful interfaces. We will also be looking at any existing browser inconsistencies as well as performance considerations to keep in mind. And there will be lots of exercises and practical examples that can be taken and directly applied in real-life projects. Read more…

Sarah Drasner on Intro To Vue.js
Vue.js brings together the best features of the JavaScript framework landscape elegantly. If you’re interested in writing maintainable, clean code in an exciting and expressive manner, you should consider joining this class. Read more…

Tim Kadlec on Demystifying Front-End Security
When users come to your site, they trust you to provide them with a good experience. They expect a site that loads quickly, that works in their browser, and that is well designed. And though they may not vocalize it, they certainly expect that the experience will be safe: that any information they provide will not be stolen or used in ways they did not expect. Read more…

Aaron Draplin on Behind The Scenes With The DDC
Go behind the scenes with the DDC and learn about Aaron’s process for creating marks, logos and more. Each student will attack a logo on their own with guidance from Aaron. Could be something you are currently working on, or have always wanted to make. Read more…

Dan Mall on Design Workflow For A Multi-Device World
In this workshop, Dan will share insights into his tools and techniques for integrating design thinking into your product development process. You’ll learn how to craft powerful design approaches through collaborative brainstorming techniques and how to involve your entire team in the design process. Read more…

Vitaly Friedman on Smart Responsive UX Design Patterns
In this workshop, Vitaly Friedman, co-founder of Smashing Magazine, will cover practical techniques, clever tricks and useful strategies you need to be aware of when working on responsive websites. From responsive modules to clever navigation patterns and web form design techniques, the workshop will provide you with everything you need to know today to start designing better responsive experiences tomorrow. Read more…

Workshops on Thursday, June 28th

Eva-Lotta Lamm on Sketching With Confidence, Clarity And Imagination
Being able to sketch is like speaking an additional language that enables you to structure and express your thoughts and ideas more clearly, quickly and in an engaging way. For anyone working in UX, design, marketing and product development in general, sketching is a valuable technique to feel comfortable with. Read more…

Nadieh Bremer on Creative Data Visualization Techniques
With so many tools available to visualize your data, it’s easy to get stuck in thinking about chart types, always just going for that bar or line chart, without truly thinking about effectiveness. In this workshop, Nadieh will teach you how you can take a more creative and practical approach to the design of data visualization. Read more…

Rachel Andrew on Advanced CSS Layouts With Flexbox And CSS Grid
This workshop is designed for designers and developers who already have a good working knowledge of HTML and CSS. We will cover a range of CSS methods for achieving layout, from those you are safe to use right now, even if you need to support older versions of Internet Explorer, through to things that, while still classed as experimental, are likely to ship in browsers in the coming months. Read more…

Joe Leech on Psychology For UX And Product Design
This workshop will provide you with a practical, hands-on way to understand how the human brain works and apply that knowledge to User Experience and product design. Learn the psychological principles behind how our brain makes sense of the world and apply that to product and user interface design. Read more…

Seb Lee-Delisle on JavaScript Graphics And Animation
In this workshop, Seb will demonstrate a variety of beautiful visual effects using JavaScript and HTML5 canvas. You will learn animation and graphics techniques that you can use to add a sense of dynamism to your projects. Read more…

Vitaly Friedman on New Front-End Adventures In Responsive Design
With HTTP/2, Service Workers, Responsive Images, Flexbox, CSS Grid, SVG, WAI-ARIA roles and Font Loading API now available in browsers, we all are still trying to figure out just the right strategy for designing and building responsive websites efficiently. We want to use all of these technologies and smart processes like atomic design, but how can we use them efficiently, and how do we achieve it within a reasonable amount of time? Read more…


Location

Maybe you’ve already wondered why our friend the Smashing Cat has dressed up as a movie director for SmashingConf Toronto? Well, that’s because our conference venue will be the TIFF Bell Lightbox. Located in the center of Toronto, it is one of the most iconic cinemas in the world and also the location where the Toronto Film Festival takes place. We’re thrilled to be hosted there!

TIFF Bell Lightbox, our venue in Toronto
The TIFF Bell Lightbox, usually a cinema, is the perfect place for thrillers and happy endings as the web writes them.

Why This Conference Could Be For You

SmashingConfs are a friendly and intimate experience. It’s like meeting good friends and making new ones. Friends who share their stories, ideas, and, of course, their best tips and tricks. At SmashingConf Toronto you will learn how to:

  1. Take full advantage of CSS Variables,
  2. Create fluid animation effects with Vue,
  3. Detect and resolve accessibility issues,
  4. Structure components in a pattern library when using CSS Grid,
  5. Build a stable, usable online experience,
  6. Design for cross-cultural audiences,
  7. Create effective and beautiful data visualization from scratch,
  8. Transform your designs with psychology,
  9. Help your design advance with proper etiquette,
  10. Sketch with pen and paper,
  11. … and a lot more.

Download “Convince Your Boss” PDF

We know that sometimes companies encourage their staff to attend a different conference each year. Well, we say: once you’ve found a conference you love, why stray…

Think your boss needs a little more persuasion? We’ve prepared a neat Convince Your Boss PDF that you can use to tip the scales in your favor to send you to the event.

Diversity and Inclusivity

We care about diversity and inclusivity at our events. SmashingConfs are a safe, friendly place. We don’t tolerate any disrespect or misbehavior. We also provide student and diversity tickets.


See You In Toronto!

We’d love to meet you in Toronto and spend two memorable days full of web goodness, lots of learning, and friendly people with you. An experience you won’t forget so soon. Promised.



Link to article: A Conference Without Slides: Meet SmashingConf Toronto 2018 (June 26-27)

Lazy Loading JavaScript Modules With ConditionerJS

Linking JavaScript functionality to the DOM can be a repetitive and tedious task. You add a class to an element, find all the elements on the page, and attach the matching JavaScript functionality to each element. Conditioner is here to not only take this work off your hands but supercharge it as well!

In this article, we’ll look at the JavaScript initialization logic that is often used to link UI components to a webpage. Step-by-step we’ll improve this logic, and finally, we’ll make a 1 Kilobyte jump to replacing it with Conditioner. Then we’ll explore some practical examples and code snippets and see how Conditioner can help make our websites more flexible and user-oriented.

Conditioner And Progressive Enhancement Sitting In A Tree

Before we proceed, I need to get one thing across:

Conditioner is not a framework for building web apps.

Instead, it’s aimed at websites. The distinction between websites and web apps is useful for the continuation of this story. Let me explain how I view the overall difference between the two.

Websites are mostly created from a content viewpoint; they are there to present content to the user. The HTML is written to semantically describe the content. CSS is added to nicely present the content across multiple viewports. The third and final act is to carefully layer JavaScript on top to add that extra zing to the user experience. Think of a date picker, navigation, scroll animations, or carousels (pardon my French).

Examples of content-oriented websites include Wikipedia, Smashing Magazine, your local municipality’s website, newspapers, and webshops. Web apps are often found in the utility area; think of web-based email clients and online maps. While they also present content, the focus of web apps is often more on interacting with content than presenting it. There’s a huge grey area between the two, but this contrast will help us decide when Conditioner might be effective and when we should steer clear.

As stated earlier, Conditioner is all about websites, and it’s specifically built to deal with that third act:

Enhancing the presentation layer with JavaScript functionality to offer an improved user experience.

The Troublesome Third Act

The third act is about enhancing the user experience with that zingy JavaScript layer.

Judging from experience and what I’ve seen online, JavaScript functionality is often added to websites like this:

  1. A class is added to an HTML element.
  2. The querySelectorAll method is used to get all elements assigned the class.
  3. A for-loop traverses the NodeList returned in step 2.
  4. A JavaScript function is called for each item in the list.

Let’s quickly put this workflow in code by adding autocomplete functionality to an input field. We’ll create a file called autocomplete.js and add it to the page using a <script> tag.

function createAutocomplete(element) {
  // our autocomplete logic
  // ...
}

<input type="text" class="autocomplete"/>

<script src="autocomplete.js"></script>

<script>
var inputs = document.querySelectorAll('.autocomplete');

for (var i = 0; i < inputs.length; i++) {
  createAutocomplete(inputs[i]);
}
</script>

Go to demo →

That’s our starting point.

Suppose we’re now told to add another functionality to the page, say a date picker; its initialization will most likely follow the same pattern. Now we’ve got two for-loops. Add another functionality, and you’ve got three, and so on and so on. Not the best.

While this works and keeps you off the street, it creates a host of problems. We’ll have to add a loop to our initialization script for each functionality we add. For each loop we add, the initialization script gets linked ever more tightly to the document structure of our website. Often the initialization script will be loaded on each page, meaning all the querySelectorAll calls for all the different functionalities will run on each and every page, whether the functionality is defined on the page or not.

For me, this setup never felt quite right. It always started out “okay,” but then it would slowly grow to a long list of repetitive for-loops. Depending on the project it might contain some conditional logic here and there to determine if something loads on a certain viewport or not.

if (window.innerWidth <= 480) {
  // small viewport for-loops here
}

Eventually, my initialization script would always grow out of control and turn into a giant pile of spaghetti code that I would not wish on anyone.

Something needed to be done.

Soul Searching

I am a huge proponent of carefully separating the three web dev layers HTML, CSS, and JavaScript. HTML shouldn’t have a rigid relationship with JavaScript, so no use of inline onclick attributes. The same goes for CSS, so no inline style attributes. Adding classes to HTML elements and then later searching for them in my beloved for-loops followed that philosophy nicely.

That stack of spaghetti loops though, I wanted to get rid of them so badly.

I remember stumbling upon an article about using data attributes instead of classes, and how those could be used to link up JavaScript functionality (I’m not sure it was this article, but it seems to be from the right timeframe). I didn’t like it, misunderstood it, and my initial thought was that this was just covering up for onclick; this mixed HTML and JavaScript, and no way was I going to be lured to the dark side. I didn’t want anything to do with it. Close tab.

Some weeks later I would return to this and found that linking JavaScript functionality using data attributes was still in line with having separate layers for HTML and JavaScript. As it turned out, the author of the article handed me a solution to my ever-growing initialization problem.

We’ll quickly update our script to use data attributes instead of classes.

<input type="text" data-module="autocomplete">

<script src="autocomplete.js"></script>

<script>
var inputs = document.querySelectorAll('[data-module=autocomplete]');

for (var i = 0; i < inputs.length; i++) {
  createAutocomplete(inputs[i]);
}
</script>

Go to demo →

Done!

But hang on, this is nearly the same setup; we’ve only replaced .autocomplete with [data-module=autocomplete]. How’s that any better? It’s not, you’re right. If we add an additional functionality to the page, we still have to duplicate our for-loop — blast! Don’t be sad though, as this is the stepping stone to our killer for-loop.

Watch what happens when we make a couple of adjustments.

<input type="text" data-module="createAutocomplete">

<script src="autocomplete.js"></script>

<script>
var elements = document.querySelectorAll('[data-module]');

for (var i = 0; i < elements.length; i++) {
  var name = elements[i].getAttribute('data-module');
  var factory = window[name];
  factory(elements[i]);
}
</script>

Go to demo →

Now we can load any functionality with a single for-loop.

  1. Find all elements on the page with a data-module attribute;
  2. Loop over the node list;
  3. Get the name of the module from the data-module attribute;
  4. Store a reference to the JavaScript function in factory;
  5. Call the factory JavaScript function and pass the element.

Since we’ve now made the name of the module dynamic, we no longer have to add any additional initialization loops to our script. This is all we need to link any JavaScript functionality to an HTML element.

This basic setup has some other advantages as well:

  • The init script no longer needs to know what it loads; it just needs to be very good at this one little trick.
  • There’s now a convention for linking functionality to the DOM; this makes it very easy to tell which parts of the HTML will be enhanced with JavaScript.
  • The init script does not search for modules that are not there, i.e. no wasted DOM searches.
  • The init script is done. No more adjustments are needed. When we add functionality to the page, it will automatically be found and will simply work.

Wonderful!

So What About This Thing Called Conditioner?

We finally have our single loop, our one loop to rule all other loops, our king of loops, our hyper-loop. Ehm. Okay. We’ll just have to conclude that ours is a loop of high quality and is so flexible that it can be re-used in each project (there’s not really anything project-specific about it). That does not immediately make it library-worthy; it’s still quite a basic loop. However, we’ll find that our loop requires some additional trickery to really cover all our use cases.

Let’s explore.

With the one loop, we are now loading our functionality automatically.

  1. We assign a data-module attribute to an element.
  2. We add a <script> tag to the page referencing our functionality.
  3. The loop matches the right functionality to each element.
  4. Boom!

Let’s take a look at what we need to add to our loop to make it a bit more flexible and re-usable. Because as it is now, while amazing, we’re going to run into trouble.

  • It would be handy if we moved the global functions to isolated modules. This prevents pollution of the global scope and makes our modules more portable to other projects. And we’ll no longer have to add our <script> tags manually: fewer things to add to the page, fewer things to maintain.
  • When using our portable modules across multiple projects (and/or pages) we’ll probably encounter a situation where we need to pass configuration options to a module. Think API keys, labels, animation speeds. That’s a bit difficult at the moment as we can’t access the for-loop.
  • With the ever-growing diversity of devices out there we will eventually encounter a situation where we only want to load a module in a certain context. For instance, a menu that needs to be collapsed on small viewports. We don’t want to add if-statements to our loop. It’s beautiful as it is, we will not add if statements to our for-loop. Never.

That’s where Conditioner can help out. It encompasses all above functionality. On top of that, it exposes a plugin API so we can configure and expand Conditioner to exactly fit our project setup.

Let’s make that 1 Kilobyte jump and replace our initialization loop with Conditioner.

Switching To Conditioner

We can get the Conditioner library from the GitHub repository, npm or from unpkg. For the rest of the article, we’ll assume the Conditioner script file has been added to the page.

The fastest way is to add the unpkg version.

<script src="https://unpkg.com/conditioner-core/conditioner-core.js"></script>

With Conditioner added to the page, let’s take a moment of silence and say farewell to our killer for-loop.

Conditioner’s default behavior is exactly the same as our now departed for-loop’s. It’ll search for elements with the data-module attribute and link them to globally scoped JavaScript functions.

We can start this process by calling the conditioner hydrate method.

<input type="text" data-module="createAutocomplete"/>

<script src="autocomplete.js"></script>

<script>
conditioner.hydrate(document.documentElement);
</script>

Go to demo →

Note that we pass the documentElement to the hydrate method. This tells Conditioner to search the subtree of the <html> element for elements with the data-module attribute.

It basically does this:

document.documentElement.querySelectorAll('[data-module]');

Okay, great! We’re set to take it to the next level. Let’s try to replace our globally scoped JavaScript functions with modules. Modules are reusable pieces of JavaScript that expose certain functionality for use in your scripts.

Moving From Global Functions To Modules

In this article, our modules will follow the new ES Module standard, but the examples will also work with modules based on the Universal Module Definition or UMD.

Step one is turning the createAutocomplete function into a module. Let’s create a file called autocomplete.js. We’ll add a single function to this file and make it the default export.

export default function(element) {
  // autocomplete logic
  // ...
}

It’s the same as our original function, only prepended with export default.

For the other code snippets, we’ll switch from our classic function to arrow functions.

export default element => {
  // autocomplete logic
  // ...
}

We can now import our autocomplete.js module and use the exported function like this:

import('./autocomplete.js').then(module => {
  // the autocomplete function is located in module.default
});

Note that this only works in browsers that support Dynamic import(). At the time of this writing that would be Chrome 63 and Safari 11.

Okay, so we now know how to create and import modules, our next step is to tell Conditioner to do the same.

We update the data-module attribute to ./autocomplete.js so it matches our module file name and relative path.

Remember: The import() method requires a path relative to the current module. If we don’t prepend the autocomplete.js filename with ./ the browser won’t be able to find the module.

Conditioner is still busy searching for functions on the global scope. Let’s tell it to dynamically load ES Modules instead. We can do this by overriding the moduleImport action.

We also need to tell it where to find the constructor function (module.default) on the imported module. We can point Conditioner in the right direction by overriding the moduleGetConstructor action.

<input type="text" data-module="./autocomplete.js"/>

<script>
conditioner.addPlugin({
  // fetch module with dynamic import
  moduleImport: (name) => import(name),

  // get the module constructor
  moduleGetConstructor: (module) => module.default
});

conditioner.hydrate(document.documentElement);
</script>

Go to demo →

Done!

Conditioner will now automatically lazy load ./autocomplete.js, and once received, it will call the module.default function and pass the element as a parameter.

Defining our autocomplete as ./autocomplete.js is very verbose. It’s difficult to read, and when adding multiple modules to the page, it quickly becomes tedious to write and error-prone.

This can be remedied by overriding the moduleSetName action. Conditioner views the data-module value as an alias and will only use the value returned by moduleSetName as the actual module name. Let’s automatically add the js extension and relative path prefix to make our lives a bit easier.

<input type="text" data-module="autocomplete"/>
conditioner.addPlugin(
  // converts module aliases to paths
  moduleSetName: (name) => `./$ name .js`
});

Go to demo →

Now we can set data-module to autocomplete instead of ./autocomplete.js, much better.

That’s it! We’re done! We’ve set up Conditioner to load ES Modules. Adding modules to a page is now as easy as creating a module file and adding a data-module attribute.

The plugin architecture makes Conditioner super flexible. Because of this flexibility, it can be modified for use with a wide range of module loaders and bundlers. There are bootstrap projects available for Webpack, Browserify, and RequireJS.

Please note that Conditioner does not handle module bundling. You’ll have to configure your bundler to find the right balance between serving a bundled file containing all modules or a separate file for each module. I usually cherry pick tiny modules and core UI modules (like navigation) and serve them in a bundled file while conditionally loading all scripts further down the page.
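As a sketch of what that balance could look like (this is an illustration, not a built-in Conditioner feature), you could serve core modules from the main bundle and let everything else fall back to dynamic import. The bundled map and the navigation module below are assumptions for illustration:

// hypothetical core UI module that ships in the main bundle
import navigation from './navigation.js';

// map of module names already present in the bundle
const bundled = {
  './navigation.js': { default: navigation }
};

conditioner.addPlugin({
  // resolve from the bundle when possible, otherwise lazy load
  moduleImport: (name) => bundled[name]
    ? Promise.resolve(bundled[name])
    : import(name),

  // get the module constructor
  moduleGetConstructor: (module) => module.default
});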

Alright, module loading — check! It’s now time to figure out how to pass configuration options to our modules. We no longer have access to our loop (and we don’t really want it back), so we need to figure out how to pass parameters to the constructor functions of our modules.

Passing Configuration Options To Our Modules

I might have bent the truth a little bit. Conditioner has no out-of-the-box solution for passing options to modules. There, I said it. To keep Conditioner as tiny as possible, I decided to strip this functionality and make it available through the plugin API. We’ll explore some other options of passing variables to modules and then use the plugin API to set up an automatic solution.

The easiest and at the same time most banal way to create options that our modules can access is to define options on the global window scope.

window.autocompleteSource = './api/query';

export default (element) => {
  console.log(window.autocompleteSource);
  // will log './api/query'

  // autocomplete logic
  // ...
}

Don’t do this.

It’s better to simply add additional data attributes.

<input type="text" 
       data-module="autocomplete" 
       data-source="./api/query"/>

These attributes can then be accessed inside our module by accessing the element dataset which returns a DOMStringMap of all data attributes.

export default (element) => {
  console.log(element.dataset.source);
  // will log './api/query'

  // autocomplete logic
  // ...
}

This could result in a bit of repetition as we’ll be accessing element.dataset in each module. If repetition is not your thing, read on; we’ll fix it right away.

We can automate this by extracting the dataset and injecting it as an options parameter when mounting the module. Let’s override the moduleSetConstructorArguments action.

conditioner.addPlugin({

  // the name of the module and the element it's being mounted to
  moduleSetConstructorArguments: (name, element) => ([
    element,
    element.dataset
  ])

});

The moduleSetConstructorArguments action returns an array of parameters which will automatically be passed to the module constructor.

export default (element, options) => {
  console.log(options.source);
  // will log './api/query'

  // autocomplete logic
  // ...
}

We’ve only eliminated the dataset call, i.e. seven characters. Not the biggest improvement, but we’ve opened the door to take this a bit further.

Suppose we have multiple autocomplete modules on the page, and each and every single one of them requires the same API key. It would be handy if that API key was supplied automagically instead of having to add it as a data attribute on each element.

We can improve our developer lives by adding a page level configuration object.

const pageOptions = {
  // the module alias
  autocomplete: {
    key: 'abc123' // api key
  }
};

conditioner.addPlugin({

  // the name of the module and the element it's being mounted to
  moduleSetConstructorArguments: (name, element) => ([
    element,
    // merge the default page options with the options set on the element itself
    Object.assign({},
      pageOptions[element.dataset.module],
      element.dataset
    )
  ])

});

Go to demo →

As our pageOptions variable has been defined with const it’ll be block-scoped, which means it won’t pollute the global scope. Nice.

Using Object.assign we merge an empty object with both the pageOptions for this module and the dataset DOMStringMap found on the element. This will result in an options object containing both the source property and the key property. Should one of the autocomplete elements on the page have a data-key attribute, it will override the pageOptions default key for that element.

const ourOptions = Object.assign(
  {},
  { key: 'abc123' },
  { source: './api/query' }
);

console.log(ourOptions);
// output: { key: 'abc123', source: './api/query' }

That’s some top-notch developer convenience right there.

By adding this tiny plugin, we can automatically pass options to our modules. This makes our modules more flexible and therefore reusable across multiple projects. We can still choose to opt out and use dataset or globally scope our configuration variables (no, don’t), whatever fits best.

Our next challenge is the conditional loading of modules. It’s actually the reason why Conditioner is named Conditioner. Welcome to the inner circle!

Conditionally Loading Modules Based On User Context

Back in 2005, desktop computers were all the rage, everyone had one, and everyone browsed the web with it. Screen resolutions ranged from big to bigger. And while users could scale down their browser windows, we looked the other way and basked in the glory of our beautiful fixed-width sites.

I’ve rendered an artist impression of the 2005 viewport:


A rectangular area illustrating a single viewport size of 1024 pixels by 768 pixels



The 2005 viewport in its full glory, 1024 pixels wide, and 768 pixels high. Wonderful.

Today, a little over ten years later, there are more people browsing the web on mobile than on desktop, resulting in lots of different viewports.

I’ve applied this knowledge to our artist impression below.


Multiple overlapping rectangles illustrating a high amount of different viewport sizes


More viewports than you can shake a stick at.

Holy smokes! That’s a lot of viewports.

Today, someone might visit your site on a small mobile device connected to a crazy fast WiFi hotspot, while another user might access your site using a desktop computer on a slow tethered connection. Yes, I switched up the connection speeds — reality is unpredictable.

And to think we were worried about users resizing their browser window. Hah!

Note that those million viewports are not set in stone. A user might load a website in portrait orientation and then rotate the device (or resize the browser window), all without reloading the page. Our websites should be able to handle this and load or unload functionality accordingly.

Someone on a tiny device should not receive the same JavaScript package as someone on a desktop device. That seems hardly fair; it’ll most likely result in a sub-optimal user experience on both the tiny mobile device and the good ol’ desktop device.

With Conditioner in place, let’s configure it as a gatekeeper and have it load modules based on the current user context. The user context contains information about the environment in which the user is interacting with your functionality. Some examples of environment variables influencing context are viewport size, time of day, location, and battery level. The user can also supply you with context hints, for instance, a preference for reduced motion. How a user behaves on your platform will also tell you something about the context she might be in: is this a recurring visit? How long is the current user session?

The better we’re able to measure these environment variables, the better we can enhance our interface to fit the context the user is in.

We’ll need an attribute to describe our module’s context requirements so Conditioner can determine the right moment for the module to load and unload. We’ll call this attribute data-context. It’s pretty straightforward.

Let’s leave our lovely autocomplete module behind and shift focus to a new module. Our new section-toggle module will be used to hide the main navigation behind a toggle button on small viewports.

Since it should be possible for our section-toggle to be unloaded, the default function returns another function. Conditioner will call this function when it unloads the module.

export default (element) => {
  // sectionToggle logic
  // ...

  return () => {
    // sectionToggle unload logic
    // ...
  };
}

We don’t need the toggle behavior on big viewports as those have plenty of space for our menu (it’s a tiny menu). We only want to collapse our menu on viewports narrower than 30em (this translates to 480px).

Let’s setup the HTML.

<nav>
  <h1 data-module="sectionToggle" 
      data-context="@media (max-width:30em)">
      Navigation
  </h1>
  <ul>
    <li><a href="/home">home</a></li>
    <li><a href="/about">about</a></li>
    <li><a href="/contact">contact</a></li>
  </ul>
</nav>

Go to demo →

The data-context attribute will trigger Conditioner to automatically load a context monitor observing the media query (max-width:30em). When the user context matches this media query, it will load the module; when it does not, or no longer does, it will unload the module.

Monitoring happens based on events. This means that after the page has loaded, should the user resize the viewport or rotate the device, the user context is re-evaluated and the module is loaded or unloaded based on the new observations.
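To get a feel for what such a monitor does under the hood, here’s a rough sketch of event-based monitoring built on the standard matchMedia API. This is not Conditioner’s actual source, and loadModule and unloadModule are hypothetical placeholders:

const mql = window.matchMedia('(max-width:30em)');

const onContextChange = (e) => {
  // loadModule and unloadModule are hypothetical placeholders
  e.matches ? loadModule() : unloadModule();
};

// observe the initial state at page load
onContextChange(mql);

// re-evaluate when the viewport is resized or the device is rotated
mql.addListener(onContextChange);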

You can view monitoring as a form of feature detection. But where feature detection is an on/off situation (the browser either supports WebGL, or it doesn’t), context monitoring is a continuous process: the initial state is observed at page load, and monitoring continues afterwards. While the user is navigating the page, the context is monitored, and observations can influence page state in real time.

This nonstop monitoring is important as it allows us to adapt to context changes immediately (without page reload) and optimizes our JavaScript layer to fit each new user context like a glove.

The media query monitor is the only monitor that is available by default. Adding your own custom monitors is possible using the plugin API. Let’s add a visible monitor which we’ll use to determine if an element is visible to the user (scrolled into view). To do this, we’ll use the brand new IntersectionObserver API.

conditioner.addPlugin({
  // the monitor hook expects a configuration object
  monitor: {
    // the name of our monitor without the '@'
    name: 'visible',

    // the create method will return our monitor API
    create: (context, element) => ({

      // current match state
      matches: false,

      // called by conditioner to start listening for changes
      addListener(change) {

        new IntersectionObserver(entries => {

          // update the matches state
          this.matches = entries.pop().isIntersecting == context;

          // inform Conditioner of the state change
          change();

        }).observe(element);

      }
    })
  }
});

We now have a visible monitor at our disposal.

Let’s use this monitor to only load images when they are scrolled into view.

Our base image HTML will be a link to the image. When JavaScript fails to load, the link will still work, and the contents of the link will describe the image. This is progressive enhancement at work.

<a href="cat-nom.jpg" 
   data-module="lazyImage" 
   data-context="@visible">
   A red cat eating a yellow bird
</a>

Go to demo →

The lazyImage module will extract the link text, create an image element, and set the link text to the alt text of the image.

export default (element) => {

  // store original link text
  const text = element.textContent;

  // replace element text with image
  const image = new Image();
  image.src = element.href;
  image.setAttribute('alt', text);
  element.replaceChild(image, element.firstChild);

  return () => {
    // restore original element state
    element.innerHTML = text;
  };
}

When the anchor is scrolled into view, the link text is replaced with an img tag.

Because we’ve returned an unload function the image will be removed when the element scrolls out of view. This is most likely not what we desire.

We can remedy this behavior by adding the was operator. It will tell Conditioner to retain the first matched state.

<a href="cat-nom.jpg" 
   data-module="lazyImage" 
   data-context="was @visible">
   A red cat eating a yellow bird
</a>

There are three other operators at our disposal.

The not operator lets us invert a monitor result. Instead of writing @visible false we can write not @visible which makes for a more natural and relaxed reading experience.
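For example, with a hypothetical stickyHeader module that should only load on viewports that are not small, the attribute could read:

<header data-module="stickyHeader"
        data-context="not @media (max-width:30em)">
  ...
</header>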

Last but not least, we can use the or and and operators to string monitors together and form complex context requirements. Using and combined with or we can do lazy image loading on small viewports and load all images at once on big viewports.

<a href="cat-nom.jpg" 
   data-module="lazyImage" 
   data-context="was @visible and @media (max-width:30em) or @media (min-width:30em)">
   A red cat eating a yellow bird
</a>

We’ve looked at the @media monitor and have added our custom @visible monitor. There are lots of other contexts to measure and custom monitors to build:

  • Tap into the Geolocation API and monitor the location of the user @location (near: 51.4, 5.4) to maybe load different scripts when a user is near a certain location.
  • Imagine a @time monitor, which would make it possible to enhance a page dynamically based on the time of day @time (after 20:00); see the sketch after this list.
  • Use the Device Light API to determine the light level @lightlevel (max-lumen: 50) at the location of the user. Which, combined with the time, could be used to perfectly tune page colors.
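Here’s what that hypothetical @time monitor could look like, following the same plugin shape as the @visible monitor above. The '(after 20:00)' parsing is invented for illustration:

conditioner.addPlugin({
  monitor: {
    // the name of our monitor without the '@'
    name: 'time',

    create: (context) => ({

      // current match state
      matches: false,

      addListener(change) {

        const check = () => {
          // assumes a context string like '(after 20:00)'
          const [, hours, minutes] = context.match(/after (\d+):(\d+)/);
          const now = new Date();
          this.matches =
            now.getHours() > +hours ||
            (now.getHours() == +hours && now.getMinutes() >= +minutes);
          change();
        };

        // observe the initial state, then re-evaluate every minute
        check();
        setInterval(check, 60 * 1000);

      }
    })
  }
});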

By moving context monitoring outside of our modules, our modules have become even more portable. If we need to add collapsible sections to one of our pages, it’s now easy to re-use our section toggle module, because it’s not aware of the context in which it’s used. It just wants to be in charge of toggling something.

And this is what Conditioner makes possible: it extracts all distractions from the module and allows you to write a module focused on a single task.

Using Conditioner In JavaScript

Conditioner exposes a total of three methods. We’ve already encountered the hydrate and addPlugin methods. Let’s now have a look at the monitor method.

The monitor method lets us manually monitor a context and receive context updates.

const monitor = conditioner.monitor('@media (min-width:30em)');
monitor.onchange = (matches) => {
  // called when a change to the context was observed
};
monitor.start();

This method makes it possible to do context monitoring from JavaScript without requiring the DOM starting point. This makes it easier to combine Conditioner with frameworks like React, Angular or Vue to help with context monitoring.

As a quick example, I’ve built a React <ContextRouter> component that uses Conditioner to monitor user context queries and switch between views. It’s heavily inspired by React Router, so it might look familiar.

<ContextRouter>
    <Context query="@media (min-width:30em)" 
             component={FancyInfoGraphic}/>
    <Context>
        {/* fallback to use on smaller viewports */}
        <table/>
    </Context>
</ContextRouter>
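As a rough illustration of the wiring, and not the actual component source, a Context component could pair the monitor method with React state along these lines (the first-match logic of the surrounding <ContextRouter> is left out):

class Context extends React.Component {
  constructor(props) {
    super(props);
    // a query-less <Context> acts as the always-matching fallback
    this.state = { matches: !props.query };
  }

  componentDidMount() {
    if (!this.props.query) return;

    // monitor the context query and re-render on changes
    this.monitor = conditioner.monitor(this.props.query);
    this.monitor.onchange = (matches) => this.setState({ matches });
    this.monitor.start();
  }

  render() {
    if (!this.state.matches) return null;
    const Component = this.props.component;
    return Component ? <Component/> : this.props.children;
  }
}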

I hope someone out there is itching to convert this to Angular. As a cat and React person I just can’t get myself to do it.

Conclusion

Replacing our initialization script with the killer for loop created a single entity in charge of loading modules. That change automatically brought with it a set of requirements. We used Conditioner to fulfill these requirements and then wrote custom plugins to extend Conditioner where it didn’t fit our needs.

Not having access to our single for loop steered us towards writing more reusable and flexible modules. By switching to dynamic imports, we could then lazy load these modules, and later load them conditionally by combining the lazy loading with context monitoring.

With conditional loading, we can quickly determine when to send which module over the connection, and by building advanced context monitors and queries, we can target more specific contexts for enhancement.

By combining all these tiny changes, we can speed up page load time and more closely match our functionality to each different context. This will result in improved user experience and as a bonus improve our developer experience as well.


Continue reading: 

Lazy Loading JavaScript Modules With ConditionerJS

Debugging JavaScript With A Real Debugger You Did Not Know You Already Have

console.log can tell you a lot about your app, but it can’t truly debug your code. For that, you need a full-fledged JavaScript debugger. The new Firefox JavaScript debugger can help you write fast, bug-free code. Here’s how it works.
In this example, we’ll crack open a very simple to-do app with Debugger. Here’s the app, based on basic open-source JavaScript frameworks. Open it up in the latest version of Firefox Developer Edition and then launch the debugger.

Link – 

Debugging JavaScript With A Real Debugger You Did Not Know You Already Have


How to Turn a Long Landing Page Into a Microsite – In 6 Easy Steps

Landing pages can get really long, which is totally fine, especially if you use a sticky anchor navigation to scroll people up and down to different page sections. It’s a great conversion experience and should be embraced.

However, there are times when having a small multi-page site, known as a microsite (or mini-site) can offer significant advantages.

This is not a conversation about your website (which is purely for organic traffic), I’m still talking about creating dedicated marketing-campaign-specific experiences. That’s what landing pages were designed for, and a microsite is very similar. It’s like a landing page in that it’s a standalone, controlled experience, but with a different architecture.

The sketch below shows the difference between a landing page and a microsite.

The landing page is a single page with six sections. The microsite has a homepage and 5 or 6 child pages, each with a persistent global navigation to connect the pages.

They are both “landing experiences”, just architected differently. I’ve noticed that many higher education landing experiences are four-page microsites. The pharmaceutical industry tends to create microsites for every new product campaign – especially those driven by TV ads.

What are the benefits of a microsite over a long landing page?

To reiterate, for most marketing campaign use cases, a single landing page – long or short – is your absolute best option. But there are some scenarios where you can really benefit from a microsite.

Some of the benefits of a microsite include:

  1. It allows more pages to be indexed by Google
  2. You can craft a controlled experience on each page (vs. a section where people can move up and down to other sections)
  3. You can add a lot more content to a certain page, without making your landing page a giant.
  4. You can get more advanced with your analytics research as there are many different click-pathways within a microsite that aren’t possible to track or design for on a single page.
  5. The technique I’m going to show you takes an Unbounce landing page and turns it into a 5-page microsite.

How to Create a Microsite from a Long Landing Page

The connective tissue of a microsite is the navigation. It links the pages together and defines the available options for a visitor. I’ll be using an Unbounce Sticky Bar as the shared global navigation to connect five Unbounce landing pages that we’ll create from the single long landing page. It’s really easy.

First, Choose a Landing Page to Work With

I’ve created a dummy landing page to work with. You can see from the zoomed-out thumbnail on the right-hand side how long it is: 10 page-sections long to be specific. (Click the image to view the whole page in a scrolling lightbox.)

The six-step process is then as follows:

I’ll explain it in more detail with screenshots and a quick video.

  1. Create the microsite pages by duplicating your landing page 5 times
  2. Delete the page sections you don’t want on each microsite page
  3. Create a Sticky Bar and add five navigation buttons
  4. Set the URL targeting of the Sticky Bar to appear on the microsite pages
  5. Add the Unbounce global script to your site
  6. Hide the Sticky Bar close button
  7. Click “Publish” << hardly a step.

Step 1: Create Your Microsite Pages

Choose “Duplicate Page” from the cog menu on your original landing page to create a new page (5 times). Then name each page and set the URL of each accordingly. In the screenshot below you can see I have the original landing page, and five microsite pages Home|About|Features|FAQ|Sign Up.

Step 2: Delete Page Sections on Each Microsite Page

Open each page in the Unbounce builder and click the background of any page section you don’t want and hit delete. It’s really quick. Do this for each page until they only have the content you want to be left in them. Watch the 30 sec video below to see how.

Pro Tip: Copy/Paste Between Pages

There is another way to do it. Instead of deleting sections, you can start with blank pages for the microsite, and copy/paste the sections you want from the landing page into the blank pages. This is one of the least-known and most powerful features of Unbounce.

The best way is to have a few browser tabs open at once (like one for each page), then just copy and paste between browser tabs. It’s epic! Watch…

Step 3: Create the Navigation With a Sticky Bar

Create a new Sticky Bar in Unbounce (it’s the same builder for landing pages and popups). Add buttons or links for each of your microsite pages, and set the “Target” of the link to be “Parent Frame” as shown in the lower-right of this screenshot.

Step 4: Set URL Targeting

This is where the connective tissue of the shared Sticky Bar comes together. On the Sticky Bar dashboard, you can enter any URLs on your domain that you want the bar to appear on. You can enter them one-by-one if you like, or to make it much faster, just use the same naming convention (unique to this microsite/campaign) on each of the microsite page URLs.

I used these URLs for my pages:

unbounce.com/pam-micro-home/
unbounce.com/pam-micro-about/
unbounce.com/pam-micro-features/
unbounce.com/pam-micro-faq/
unbounce.com/pam-micro-signup/

For the URL Targeting, I simply set one rule: URLs need to contain “pam-micro”.
For the Trigger, I selected “When a visitor arrives on the page.”
For the frequency, I selected “Show on every visit” because the nav needs to be there always.

Step 5: Add the Unbounce Script

We have a one-line JavaScript snippet that needs to be added to your website to make the Sticky Bars work. If you use Google Tag Manager on your site, then it’s super easy; just give the code snippet to your dev to paste into GTM.

Note: As this microsite solution was 100% within Unbounce (Landing Pages and Sticky Bar), you don’t actually have to add the script to your website; you can just add it to each of the landing pages individually. But it’s best to get it set up on your website, which will show it on your Unbounce landing pages on that domain, by default.

Step 6: Hide the Sticky Bar Close Button

As this is a navigation bar, and not a promo, we need to make sure it’s always there and can’t be hidden. It’s not a native feature in the app right now, so you need to add this CSS to each of the microsite pages.

.ub-emb-iframe-wrapper .ub-emb-close {
  visibility: hidden;
}

Click Publish on #AllTheThings!

And that’s that!


You can see the final microsite here.
(Desktop only right now I’m afraid. I’ll set up mobile responsive soon but it’s 2am and this blogging schedule is killing me :D).


I’ve also written a little script that uses cookies to change the visual state of each navigation button to show which pages you’ve visited. I’ll be sharing that in the future for another concept to illustrate how you can craft a progress bar style navigation flow to direct people where you want them to go next!

A Few Wee Caveats

  • This use of a Sticky Bar isn’t a native feature of Unbounce at this point, it’s just a cool thing you can do. As such, it’s not technically supported, although our community loves this type of thing.
  • As it’s using a shared Sticky Bar for the nav, you’ll see it re-appear on each new page load. Not perfect, but it’s not a big deal and the tradeoff is worth it if the other benefits mentioned earlier work for you.

All in all, this type of MacGyvering is great for generating new ways of thinking about your marketing experiences, and how you can guide people to a conversion.

I’ve found that thinking about a microsite from a conversion standpoint is a fantastic mental exercise.

Have fun making a microsite, and never stop experimenting – and MacGyvering!
Cheers
Oli

p.s. Don’t forget to subscribe to the weekly updates for the rest of Product Awareness Month.

From – 

How to Turn a Long Landing Page Into a Microsite – In 6 Easy Steps

Technology isn’t the Problem, We Are: 5 Horrific Website Popup Examples

It’s Day 5 of Product Marketing Month. Today I get to bash some really bad popup examples. Yuss! — Unbounce co-founder Oli Gardner

But before I bring the heat, I want to talk a bit about what it’s like, as a product marketer, to be marketing something that’s difficult to market.

You see, there’s a common problem that many marketers face, and it’s also one of the most asked questions I hear when I’m on the road, as a speaker:

“How do I do great marketing for a boring product or service?”

That’s a tough challenge for sure, although the good news is that if you can inject some originality you’ll be a clear winner, as all of your competitors are also boring. However, I think I can one-up that problem:

“How do I do great marketing for something that’s universally hated, like popups?”

We knew we had a big challenge ahead of us when we decided to release the popups product because of the long legacy of manipulative abuse it carries with it.

In fact, as the discussion about product direction began in the office, there were some visceral (negative) reactions from some folks on the engineering team. They feared that we were switching over to the dark side.

It makes sense to me that this sentiment would come from developers. In my experience, really good software developers have one thing in common. They want to make a difference in the world. Developers are makers by design, and part of building something is wanting it to have a positive impact on those who use it.

To quell those types of fears requires a few things:

  • Education about the positive use cases for the technology,
  • Evidence in the form of good popup examples, showcasing how to use them in a delightful and responsible manner,
  • Features such as advanced triggers & targeting to empower marketers to deliver greater relevance to visitors,
  • And most important of all – it requires us to take a stance. We can’t change the past unless we lead by example.

It’s been my goal since we started down this path, to make it clear that we are drawing a line in the sand between the negative past, and a positive future.

Which is why we initially launched with the name “Overlays” instead of popups.

Overlays vs. Popups – The End of an Era

It made a lot of sense at the time, from a branding perspective. Through podcast interviews and public speaking gigs, I was trying to change the narrative around popups. Whenever I was talking about a bad experience, I would call it a popup. When it was a positive (and additive) experience, I’d call it an overlay. It was a really good way to create a clear separation.

I even started to notice more and more people calling them overlays. Progress.

Unfortunately, it would still require a lot of continued education to make a dent in the global perception of the terminology. That, combined with the search volume for “overlays” being tiny compared to popups, factored heavily into our decision to pivot back to calling a popup a popup.

Positioning is part of a product marketer’s job – our VP of Product Marketing, Ryan Engley, recently completed our positioning document for the new products. Just as the umbrella term “Convertables” we had been using to include popups and sticky bars had created confusion, “Overlays” was again making the job harder than it should have been. You can tell, just from reading this paragraph alone, that it’s a complex problem, and we’re moving in the right direction by re-simplifying.

The biggest challenge developing our positioning was the number of important strategic questions that we needed to answer first. The market problems we solve, for who, how our product fits today with our vision for the future, who we see ourselves competing with, whether we position ourselves as a comprehensive platform that solves a unique problem, or whether we go to market with individual products and tools etc. It’s a beast of an undertaking.

My biggest lightbulb moment was working with April Dunford who pushed me to get away from competing tool-to-tool with other products. She said in order to win that way, you’d have to be market leading in every tool, and that won’t happen. So what’s the unique value that only you offer and why is it important?

— Ryan Engley, VP Product Marketing at Unbounce

You can read more about our initial product adoption woes, and how our naming conventions hurt us, in the first post in the series – Product Marketing Month: Why I’m Writing 30 Blog Posts in 30 Days.

Let’s get back to the subject of popups. I think it’s important to look back at the history of this device to better understand how they came about, and why they have always caused such a stir.

Browser Interaction Models & the History of the Popup

The talk I was doing much of last year was called Data-Driven Design. As part of the talk, I get into interaction design trends. I’ve included the “Trendline” slide below.

You can see that the first occurrence of a popup was back in 1998. Also, note that I included Overlays in late 2016 when we first started that discussion.

Like many bad trends, popups began as web developers started trying to hack browser behavior to create different interruptive interaction modes. I know I made a lot of them back in the day, but I was always doing it to try to create a cool experience. For example, I was building a company Intranet and wanted to open up content in a new window, resize it, and stick it to the side of the screen as a sidebar navigation for the main window. That was all good stuff.

Tabbed browsers have done a lot to help clean up the mess of multiple windows, and if you couple that with popup blockers, there’s a clear evolution in how this type of behavior is being dealt with.

Then came the pop-under, often connected to malware schemes where malicious scripts could be running in the background without you even knowing.

And then the always fun “Are you sure you want to do that?” Inception-like looping exit dialogs.

Developers/hackers took the simple JavaScript modal dialog (“OK”/“Cancel”) and abused it to the point where there was no real way out of the page. If you tried to leave the page, one modal would lead to another, and another, and you couldn’t actually close the browser window/tab unless you managed it within the split second between one dialog closing and the next opening. It was awful.

So we have a legacy of abuse that’s killed the perception of popups.

What if Popups Had Been Built Into Browsers?

Imagine for a moment that a popup was simply one of many available interaction models available in the browsing experience. They could have had a specification from the W3C, with a set of acceptable criteria for display modes. It would be an entirely different experience. Sure, there would still be abuse, but it’s an interesting thought.

This is why it’s important that we (Unbounce and other like-minded marketers and Martech software providers) take a stance, and build the right functionality into this type of tool so that it can be used responsibly.

Furthermore, we need to keep the dialog going, to educate the current and future generations of marketers that to be original, be delightful, be a business that represents themselves as professionals, means taking responsibility for our actions and doing everything we can to take the high road in our marketing.

Alright, before I get to the really bad website popup examples, I’ll leave you with this thought:

Technology is NOT the problem, We Are.

It’s the disrespectful and irresponsible marketers who use manipulative pop-psychology tactics for the sake of a few more leads, who are the problem. We need to stop blaming popups for bad experiences, and instead, call out the malicious marketers who are ruining it for those trying to do good work.

It’s a tough challenge to reverse years of negative perception, but that’s okay. It’s okay because we know the value the product brings to our customers, how much extra success they’re having, and because we’ve built a solution that can be configured in precise ways that make it simple to use in a responsible manner (if you’re a good person).


Follow our Product Marketing Month journey >> click here to launch a popup with a subscribe form (it uses our on-click trigger feature).


5 Really Bad Website Popup Examples

What does a bad popup actually look like? Well, it depends on your judging criteria, and for the examples below, I was considering these seven things, among others:

  1. Clarity: Is it easy to figure out the offer really quickly?
  2. Relevance: Is it related to the content of the current page?
  3. Manipulation: Does it use psychological trickery in the copy?
  4. Design: Is it butt ugly?
  5. Control: Is it clear what all options will do?
  6. Escape: Can you get rid of it easily?
  7. Value: Is the reward worth more than the perceived (or actual) effort?

#1 – Mashable Shmashable

What’s so bad about it?

If you peer into the background behind the popup, you’ll see a news story headline that begins with “Nightmare Alert”. I think that’s a pretty accurate description of what’s happening here.

  • Design: Bad. The first thing I saw looks like a big mistake. The Green line with the button hanging off the bottom looks like the designer fell asleep with their head on the mouse.
  • Clarity: Bad. And what on earth does the headline mean? click.click.click. Upon deeper exploration, it’s the name of the newsletter, but that’s not apparent at all on first load.
  • Clarity: worse. Then we get the classic “Clear vs. Clever” headline treatment. Why are you talking about the pronunciation of the word “Gif”? Tell me what this is, and why I should care to give you my email.
  • Design: Bad. Also, that background is gnarly.

#2 – KAM Motorsports Revolution!

What’s so bad about it?

It’s motorsports. It’s not a revolution. Unless they’re talking about wheels going round in circles.

  • Clarity: Bad. The headline doesn’t say what it is, or what I’ll get by subscribing. I have to read the fine print to figure that out.
  • Copy: Bad. Just reading the phrase “abuse your email” is a big turn off. Just like the word spam, I wasn’t thinking that you were going to abuse me, but now it’s on my mind.
  • Relevance: Bad. Newsletter subscription popups are great, they have a strong sense of utility and can give people exactly what they want. But I don’t like them as entry popups. They’re much better when they use an exit trigger, or a scroll trigger. Using a “Scroll Up” trigger is smart because it means they’ve read some of your content, and they are scrolling back up vs. leaving directly, which is another micro-signal that they are interested.

#3 – Utterly Confused


(Source unknown – I found it on confirmshaming.tumblr.com)

What’s so bad about it?

I have no earthly clue what’s going on here.

  • Clarity: Bad. I had to re-read it five times before I figured out what was going on.
  • Control: Bad. After reading it, I didn’t know whether I would be agreeing with what they’re going to give me, or with the statement. It’s like an affirmation or something. But I have no way of knowing what will happen if I click either button. My best guess after spending this much time writing about it is that it’s a poll. But a really meaningless one if it is. Click here to find out how many people agreed with “doing better”…
  • It ends with “Do Better”. I agree. They need to do a lot better.

#4 – Purple Nurple

What’s so bad about it?

  • Manipulation: Bad. Our first “Confirm Shaming” example. Otherwise known as “Good Cop / Bad Cop”. Forcing people to click a button that says “Detest” on it is so incongruent with the concept of a mattress company that I think they’re just being cheap. There’s no need to speak to people that way.
  • I found a second popup example by Purple (below), and have to give them credit. The copy on this one is significantly more persuasive. Get this. If you look at the section I circled (in purple), it says that if you subscribe, they’ll keep you up to date with SHIPPING TIMES!!! Seriously? If you’re going to email me and say “Hey Oli, great news! We can ship you a mattress in 2 weeks!”, I’ll go to Leesa, or Endy, or one of a million other Casper copycats.


#5 – Hello BC

What’s so bad about it?

Context: This is an entry popup, and I have never been to this site before.

  • Relevance: Bad. The site is Hellobc.com, the title says “Supernatural British Columbia”, and the content on the page is about skydiving. So what list is this for? And nobody wants to be on a “list”, stop saying “list”. It’s like saying email blast. Blast your list. If you read the first sentence it gets even more confusing, as you’ll be receiving updates from Destination BC. That’s 4 different concepts at play here.
  • Design: Bad. It’s legitimately butt ugly. I mean, come on. This is for Beautiful Supernatural British Columbia ffs. It’s stunning here. Show some scenery to entice me in.
  • Value: Bad. Seeing that form when I arrive on the page is like a giant eff you. Why do they think it’s okay to ask for that much info, with that much text?
  • Control: Bad. And there isn’t any error handling; the submit button remains inactive until you magically click the right number of options to trigger its hungry hungry hippo mouth to open.

Trainwreck.


Well, that’s all for today, folks. You might be wondering why there were so few popup examples in this post; keep reading and I’ll explain why.

Coming Up Tomorrow – Good Popups, YAY!!!

One of the most interesting things I’ve noticed of late is that there is a shift in quality happening in the popup world. When the team rallied to find the bad popup examples above, we found at least 10x as many good ones as bad. That’s something to feel pretty good about. Perhaps the positive energy we’re helping to spread is having an impact.

So get your butt back here tomorrow to see 20+ delightful website popup examples. More importantly, I’ll also be sharing “The Delight Equation”, my latest formula for quantifying how good your popups really are.

See you then!

Cheers
Oli

p.s. Don’t forget to subscribe to the weekly updates.

Excerpt from: 

Technology isn’t the Problem, We Are: 5 Horrific Website Popup Examples

How to Build a Sticky SaaS Product

It’s Day 4 of Product Marketing Month. Today’s post examines churn, and accelerating your customer’s Time to Value. — Unbounce co-founder Oli Gardner

What’s The Easiest Way to Make a Sticky Product?

Build a product and call it a Sticky Bar. Look over there >>

With that dad joke out of the way, I want to talk about some important SaaS metrics that will help product marketers understand what is or isn’t sticky about the products you’re building. I’m also going to talk about how to find some clues in your data, and finally how we’ve been working on fixing our own stickiness problems at Unbounce.

What makes a SaaS product sticky?

In short, it’s a product that becomes part of someone’s daily routine. Something they can’t do as good or enjoyable a job without. It’s something they need. But even more than that, it’s something they love. And that love comes from two main places; either it’s a one-of-a-kind solution that’s simply awesome, or it’s a many-of-a-kind solution that includes a superior customer experience.

Unless you’re selling frying pans, having a product that isn’t sticky just means that you have a churn problem.

Why do people churn from a SaaS product?

There are many reasons why someone would decide to stop paying you money every month, and a great way to find out what they are is simply to ask. At Unbounce we have an in-app cancellation survey that gets seen by anyone who downgrades to free from a paid plan. It’s a single open-ended question: “What is your reason for downgrading?”

On average, about 27% of people answer the question. Here’s a pie chart showing the breakdown of responses over a 6-month period.

There are some fascinating responses in there, but the one that stands out the most is the “Under-Utilized” (31.40%) category. This is the response you don’t want to hear. They were using it, and probably deriving value from it, but they weren’t using it enough to warrant an ongoing subscription.

We noticed that certain cohorts of customers were more likely to see fluctuating value from using landing pages. This could be due to an irregular number or timing of campaigns, so when there’s nothing going on, they cancel or downgrade their plan. They may come back again, many do, but the inconsistent behavior – known as flapping – can be a symptom of a product that isn’t fully sticky.

It could also be because they were not able to effectively calculate or realize the ROI of our solution properly, which can happen when customers are running campaigns not directly tied to paid spend. When there is money involved, it’s easier to calculate your returns, speeding up the time to value (TTV).

Increase Stickiness by Reducing Time to Value (TTV)

Time to Value (TTV) is the velocity of a customer seeing the value from your product – how long does it take them to get to those ah-ha moments that make them see the benefit?

It’s a great way to think from the customer’s perspective. The tough part is discovering what those ah-ha moments are. But when you do, product marketing has its mission.

When your TTV goes down, your stickiness and your LTV (Lifetime Value) go up.

One of the ah-ha moments we’ve identified for Unbounce is when people get inside the builder, and they see how easy, yet how powerful and customizable it is. The problem is, they don’t get to see that until they slap down a credit card.

To amplify this, we just released a preview mode of our builder, designed specifically to help shorten the TTV.

You can check it out at preview.unbounce.com.

If you click the image, you’ll notice that upon arrival, you see a popup with a welcome message and a bit of contextual information about where you are and what you can do.

Speaking of popups, I’ve heard this question a lot in the past 6 months:

“Why did you choose popups and sticky bars as your next products?”

Why did we choose popups and sticky bars?

With the landing page product, we knew we’d solved the pain of customers increasing conversions for their paid traffic, but those same customers (and their teams) also need ways to optimize the organic traffic they already have.

Popups and sticky bars are both tools that have a short time to value, for three reasons:

  1. They are less complex than a landing page (to build and to understand)
  2. Most businesses get more traffic to their website than they do to their landing pages, so they’ll see conversions coming in more quickly
  3. After installing a single line of JavaScript, you can place them anywhere on your site without technical help.

If you remember back to the chart at the beginning, 4.45% of canceling customers said they had “poor campaign performance”. This is important to note because getting something up and running quickly and easily is one thing, but being successful – especially as a marketer – is another thing altogether.

Which is why our job as product marketers doesn’t stop until the customer is successful using our software.

The other way to increase product stickiness

I mentioned earlier about needing to provide a superior customer experience, especially in a crowded competitive landscape. You need to have a company people trust more than the competition, and trust comes from transparency, security, reliability, and just giving a human shit about your customers.

I love how Andy Raskin puts it in his recent post about “The greatest sales pitch I’ve seen all year, it’s Drift’s and it’s brilliant”:

Product differentiation, by itself, has become indefensible because today’s competitors can copy your better, faster, cheaper features virtually instantly. Now, the only thing they can’t replicate is the trust that customers feel for you and your team. Ultimately, that’s born not of a self-centered mission statement like “We want to be best-in-class X” or “to disrupt Y,” but of a culture whose beating heart is a strategic story that casts your customer as the world-changing hero.

How are you reducing your time to value (TTV)?

I’d love to know the techniques you’re using to discover your ah-ha moments, and what you’re doing to accelerate your customers’ access to them. Lemme know in the comments.

Cheers,
Oli Gardner

Continued:  

How to Build a Sticky SaaS Product