Tag Archives: code

Visual Studio Live Share Can Do That?

Burke Holland



A few months ago, Microsoft released its free Visual Studio (VS) Live Share service. VS Live Share is Google Docs level collaboration for code. Multiple developers can collaborate on the same file at the same time without ever leaving their own editor.

After the release of Live Share, I realized that many of us have resigned ourselves to being isolated in our code and we’re not even aware that there are better ways to work with a service like VS Live Share. This is partly because we are stuck in old habits and partly because we just aren’t aware of what all VS Live Share can do. That last part I can help with!

In this article, we’ll go over the features and best practices for VS Live Share that make developer collaboration as easy as being an “Anonymous Hippo.”


[Image: Google Docs has an interesting way of handling anonymous participants]

Share Your Code

Live Share comes as an extension for both Visual Studio and Visual Studio Code (VS Code). In this article, we’re going to focus on VS Code.


[Image: The VS Live Share extension readme page in VS Code]

You can also install it via the VS Live Share Extension Pack, which includes the following extensions, all of which we are going to cover in this article:

  • VS Live Share
  • VS Live Share Audio
  • Slack Chat extension

Once the extension is installed, you will need to log in to the VS Live Share service. You can do that by opening the Command Palette (Ctrl/Cmd + Shift + P) and selecting “Sign In With Browser”. If you don’t log in and you try to start a new sharing session, you will be prompted to log in at that time.


[Image: Use the VS Code Command Palette to start a new Live Share session]

There are several ways to kick off a VS Live Share session. You can do it from the Command Palette, you can click that “Share” button in the bottom toolbar, or you can use the VS Live Share explorer view in the Sidebar.


[Image: There are a myriad of ways to start a new VS Live Share session]

Once you start a session, a link is copied to your clipboard. You can then send that link to others, and they can join your Live Share session — provided they are using VS Code as well. Which, aren’t we all?

Now you can collaborate just like you were working on a regular old Word document:

The other person can not only see your code, but they can edit it, save it, execute it and even debug it. For you, they show up as a cursor with a name on it. You show up in their editor the same way.

The VS Live Share Explorer

The VS Live Share explorer shows up as a new icon in the Action Bar — which is that bar of icons on the far right of my screen (the far left of yours for default Action Bar placement). This is a sort of “ground zero” for everything VS Live Share. From here, you can start sessions, end them, share terminals, servers, and see who is connected.


[Image: The VS Live Share Explorer is a heads-up view of all things Live Share]

It’s a good idea to bind a keyboard shortcut to this VS Live Share Explorer view so that you can quickly toggle between it and your files. You can do this by opening the Keyboard Shortcuts editor (Ctrl/Cmd + K, then Ctrl/Cmd + S) and searching for “Show Live Share”. I bound mine to Ctrl/Cmd + L, which doesn’t seem to be bound to anything else. I find this shortcut to be intuitive (L for Live Share) and easy to hit on the keyboard.
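If you would rather define the binding by hand, it can also go in your keybindings.json. Here is a minimal sketch; the command ID below is illustrative, not confirmed (copy the actual one from the Keyboard Shortcuts UI, as extension view IDs can change):

[
  {
    "key": "ctrl+l",
    "command": "workbench.view.extension.liveshare"
  }
]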


[Image: You can create a binding for the VS Live Share Explorer viewlet]

Share Code Read-Only

When you start a new sharing session, you will see a notification asking if you would like to share your workspace read-only. If you select read-only, people will be able to see your code and follow your movements, but they will not be able to interact with it.


[Image: Sharing sessions are read-write by default, but you can make them read-only]

This mode is useful when you are sharing with someone that you don’t necessarily trust — maybe a vendor, partner or an estranged ex.

It’s also particularly useful for instructors. Note that at the time of this writing, VS Live Share is locked to 5 concurrent users. Since you will probably want more than that in read-only mode, especially if you’re teaching a group, you can raise the limit to 30 by adding the following line to your User Settings file (Ctrl/Cmd + ,):

"liveshare.features": "experimental"

Change The Default Join Behavior

Anyone with the link can join your Live Share session. When they join, you’ll see a pop-up letting you know. Likewise, when they disconnect, you get notified. This is the default behavior for VS Live Share.


[Image: VS Code will alert you whenever someone joins your session]

It’s a good idea to change this so that you have to manually approve someone before they can join your session. This protects you in the case where you go to lunch and forget to disconnect your session. That way, your co-workers can’t sneak in, change one letter in your database connection string, and then laugh while you spend the next four hours trying to figure out how your life has gone so horribly wrong.

To enable this, add the following line to your User Settings file (Ctrl/Cmd + ,):

"liveshare.guestApprovalRequired": true

Now you’ll be prompted when someone wants to join. If you block someone, they are blocked for the duration of the session. If they try to join again, you won’t be notified and they will be unceremoniously rejected by VS Live Share.

Go and enjoy your lunch. Your computer is safe.

Focus Followers

By default, anyone who joins your Live Share session is “following” you. That means that their editor will load up whatever file you are in and scroll whenever you scroll. Even if you switch files, participants will see exactly what you see.

The second that a person makes changes to a file, they are no longer following you. So if you are both working on a file together, and then you go to a different file, they won’t automatically go with you. That can lead to a lot of confusion with you talking about code in the file you’re in while the other person is looking at something entirely different.

Besides just telling each other where you are (which works, by the way), there is a handy command called “Focus Participants” in the Command Palette (Ctrl/Cmd + Shift + P).


[Image: Access the “focus” command from the VS Code Command Palette]

You can also access it as an icon in the VS Live Share Explorer view.


[Image: Send a follow request by clicking the follow icon in the VS Live Share Explorer viewlet]

This will focus your participants on the next thing you click on or scroll to. By default, VS Live Share focus requests are accepted implicitly. If you don’t want people to be able to focus you, you can add the following line to your User Settings file.

"liveshare.focusBehavior": "prompt"

Also note that you can follow participants. If you click on their name in the VS Live Share Explorer view, you will begin to follow them.

Because following is turned off as soon as the other person begins editing code, it can be tough to know exactly when people are following you and when they aren’t. One place you can look is in the VS Live Share Explorer view. It will tell you the file that a person is in, but not whether or not they are following you.

A good practice is to just remember that focus is always changing so people may or may not see what you see at any given time.

Debug As A Team

Participants can share any debug sessions that you run. If you start a debug session, they will get the exact same experience that you do. If it breaks on your side, it breaks on theirs, and they get the full debug view into all of your code.

They can step in, out, over, add watches, evaluate in the Debug Console; any debugging that you can do, they can do too, and they can control it.

Debugging can also be launched by participants. By default, though, VS Code does not allow your debugger to be started remotely. To enable this, add the following line to your User Settings file (Ctrl/Cmd + ,):

"liveshare.allowGuestDebugControl": true

Share Your Terminal

A lot of the work we do as developers isn’t in our code; it’s in the terminal. Some days it seems like I spend about as much time in my terminal as I do in my editor. This means that if you have an error in your terminal or need to type some command, it would be nice if the participants in your VS Live Share session could see your terminal in addition to your code.

VS Code has an integrated terminal, and you can share it with VS Live Share.


[Image: Access the “Share Terminal” command from the VS Code Command Palette]

When you do this, you have the opportunity to share your terminal as read-only, or as read-write.


[Image: Always share your terminal read-only unless you absolutely have to share it with write access]

By default, you should be sharing your terminal as read-only. When you share your terminal read-write, the user can execute arbitrary commands directly on your terminal. Let that sink in for a moment. That’s heavy.

It goes without saying that having remote write access to someone’s terminal comes with a lot of trust and responsibility. You should only ever share your terminal read-write with people that you trust implicitly. Estranged exes are probably off the table.

Sharing your terminal read-only safely allows the person on the other end of the line to see what you are typing and your terminal output in real time, but restricts them from typing anything into that terminal.

Should you find yourself in a scenario where it would be quicker for the other person to just get at your terminal instead of trying to walk you through some wacky command with a ton of flags, you can share your terminal read-write. In this mode, the other person has full remote access to your terminal. Choose your friends wisely.

Share Your localhost

Suppose the command running in your shared terminal ends with a link to a site running on http://localhost:8080. With VS Live Share, you can share that localhost so that the other person can access it just as if it were their own localhost.

If you are running a shared debug session, when the participant hits that localhost URL on their end, it will break for both of you if a breakpoint is hit. Even better, you can share any TCP process. That means that you can share something like a database or a Redis cache. For instance, you could share your local Mongo DB server. Seriously! This means no more changing config files or trying to get a shared database up. Just share the port for your local Mongo DB instance.

Share The Right Files The Right Way

Sometimes you don’t want collaborators to see certain files. There are likely private keys and passwords in your project that are not checked into source control and not suitable for public viewing. In this case, you would want to hide those files from anyone participating in your Live Share session.

By default, VS Live Share will hide any file that is specified in your .gitignore. If there is a file that you want to hide, just add it to your .gitignore. Note though, that this only hides the file in the project view. If you are in a shared debugging session and you step into a file that is in the .gitignore, it is still loaded up in the editor and your collaborators will be able to see it.

You can get more fine-grained control over how you share files by creating a .vsls.json file.

For instance, if you want to make sure that any files that are in the .gitignore are never visible, even during debugging, you can set the gitignore property to exclude:


    "$schema": "http://json.schemastore.org/vsls",
    "gitignore":"exclude"

Likewise, you could show everything in your .gitignore and control file visibility directly from the .vsls.json file. To do that, set gitignore to none and then use the excludeFiles and hideFiles properties. Remember: exclude means never visible, and hide means “not visible in the file explorer.”


    "$schema": "http://json.schemastore.org/vsls",
    "gitignore":"none",
    "excludeFiles":[
        "*.env"
    ],
    "hideFiles": [
        "dist"
    ]

Sharing And Extensions

Part of the appeal of VS Code to a lot of developers is the massive extensions marketplace. Most people will have more than a few installed. It’s important to understand how extensions will work, or not work, in the context of VS Live Share.

VS Live Share will synchronize anything that is specific to the context of the project you are sharing. For instance, if you have the Vetur extension installed because you are working with a Vue project, it will be shared across to any participants — regardless of whether or not they have it installed as well. The same is true for other context-specific things, like linters, formatters, debuggers, and language services.

VS Live Share does not synchronize extensions that are user specific. These would be things like themes, icons, keyboard bindings, and so on. As a general rule of thumb, VS Live Share shares your context, not your screen. You can consult the official docs article on this subject for a more in-depth explanation of what extensions you can expect to be shared.

Communicate While You Collaborate

One of the first things people do on their inaugural VS Live Share experience is to try to communicate by typing in code comments. This seems like the write (get it?) thing to do, but it’s not really how VS Live Share was designed to be used.

VS Live Share is not meant to replace your chat client of choice. You likely already have a preferred chat mechanism, and VS Live Share assumes that you will continue to use that.

If you’re already using Slack, there is a VS Code extension called Slack Chat. This extension is still a tad early in its development, but it looks quite promising. It puts VS Code in split mode and embeds Slack on the right-hand side. Even better, you can start a Live Share session directly from the Slack chat.


[Image: The Slack Chat extension puts Slack inside of your editor]

Another tool that looks quite interesting is called CodeStream.

CodeStream

While VS Live Share looks to improve collaboration from the editor, CodeStream is aiming to solve that same problem from a chat perspective.

The CodeStream extension allows you to chat directly within VS Code and those chats become part of your code history. You can highlight a chunk of code to discuss and it goes directly into the chat so there is context for your comments. These comments are then saved as part of your Git repo. They also show up in your code as little comment icons, and these comments will show up no matter which branch you are on.

When it comes to VS Live Share, CodeStream offers a complementary set of features. You can start new sessions directly from the chat pane, as well as by clicking on an avatar. New sessions automatically create a corresponding chat channel that you can persist with the code, or dispose of when you are done.

If chatting isn’t enough to get the job done, and you need to collaborate like it’s 1999, help is just a phone call away.

VS Live Share Audio

While VS Live Share isn’t trying to reinvent chat, it does re-invent your telephone. Kind of.

With the VS Live Share Audio extension, you can call someone directly and do voice chat from within VS Code.


[Image: Make audio calls from VS Code using the VS Live Share Audio extension]

The other person will then get a prompt to join your call.


[Image: VS Code will ask you if you want to join an audio call that is in process]

You will see a speaker icon in the bottom status bar when you are connected to a call. You can click on that speaker to change your audio device, mute yourself, or disconnect from the call.


[Image: You have full control over audio settings when in a VS Live Share Audio call]

The last tip I’ll give you is probably the most important, and it’s not a fancy feature or obscure setting you didn’t know existed.

Change Your Muscle Memory

We’ve got years of learned behavior when it comes to getting help or sharing our code. The state of developer collaboration tools has been so bad for so long that we are conditioned to paste code into Slack, start awkward Skype calls that consist mostly of “tell me when you can see my screen”, or crowd around a monitor and point excessively, i.e. stock photo style.


[Image: A group of people pointing at a computer screen]

The most important thing you can do to get the most out of VS Live Share is to actually use VS Live Share. And it will have to be a “conscious” effort.

Your brain is good at patterns. You are constantly recognizing and classifying the world around you based on patterns you have identified, and you are so good at it, you don’t even realize you are doing it. You then develop default responses to these patterns. You form instincts. This is why you will default to the old ways of collaboration without even thinking about what you are doing. Before you know it, you will be on a Skype call with someone sharing your screen — even if you have Live Share installed.

I’ve written a lot about VS Code and people will ask me from time to time how they can get more productive with their editor. I always say the same thing: the next time you reach for the mouse to do something, stop. Can you do that something with the keyboard instead? You probably can. Look up the shortcut and then make yourself use it. At first it’s going to be slower, but if you are willing to deliberately adopt a different behavior, you will be astonished at how fast your brain will default to the more productive way of doing something.

The same goes for Live Share. You will be on a call sharing your screen when it occurs to you that you could be using Live Share. At that moment, stop; click that “Share” button in the bottom of VS Code.

Yes, the person on the other end may not have the extension installed. Yes, it may take a moment to set it up. But if you work on establishing this behavior now, the next time you go to do this, it will “just work” and it won’t be long before you don’t even have to think about it, and at that point, you will finally have achieved that “Anonymous Hippo” level of collaboration.


Designing A Textbox, Unabridged

Shane Hudson



Ever spent an hour (or even a day) working on something just to throw the whole lot away and redo it in five minutes? That isn’t just a beginner’s code mistake; it is a real-world situation that you can easily find yourself in especially if the problem you’re trying to solve isn’t well understood to begin with.

This is why I’m such a big proponent of upfront design, user research, and often creating multiple prototypes — also known as the old adage of “You don’t know what you don’t know.” At the same time, it is very easy to look at something someone else has made, which may have taken them quite a lot of time, and think it is extremely easy because you have the benefit of hindsight by seeing a finished product.

This idea that simple is easy was summed up nicely by Jen Simmons while speaking about CSS Grid and Piet Mondrian’s paintings:

“I feel like these paintings, you know, if you look at them with the sense of like ‘Why’s that important? I could have done that.’ It’s like, well yeah, you could paint that today because we’re so used to this kind of thinking, but would you have painted this when everything around you was Victorian — when everything around you was this other style?”

I feel this sums up the feeling I have about seeing websites and design systems that make complete sense; it’s almost as if the fact they make sense means they were easy to make. Of course, it is usually the opposite; writing the code is the simple bit, but it’s the thinking and process that goes into it that takes the most effort.

With that in mind, I’m going to explore building a text box, in an exaggeration of situations many of us often find ourselves in. Hopefully, by the end of this article, we can all feel more empathetic toward how the journey from start to finish is rarely linear.


Brief

We all know that careful planning and understanding of the user need is important to a successful project of any size. We also all know that all too often we feel the need to rush to quickly design and develop new features. That can often mean our common sense and best practices are forgotten as we slog away to quickly get onto the next task on the everlasting to-do list. Rinse and repeat.

Today our task is to build a text box. Simple enough, it needs to allow a user to type in some text. In fact, it is so simple that we leave the task to last because there is so much other important stuff to do. Then, just before we pack up to go home, we smirk and write:

<input type="text">

There we go!

Oh wait, we probably need to hook that up to send data to the backend when the form is submitted, like so:

<input type="text" name="our_textbox">

That’s better. Done. Time to go home.

How Do You Add A New Line?

The issue with using a simple text box is that it is pretty useless if you want to type a lot of text. For a name or title it works fine, but quite often a user will type more text than you expect. Trust me when I say that if you leave a textbox for long enough without strict validation, someone will paste the entirety of War and Peace. In many cases, this can be prevented by having a maximum number of characters.

In this situation though, we have found out that our laziness (or bad prioritization) of leaving it to the last minute meant we didn’t consider the real requirements. We just wanted to do another task on that everlasting to-do list and get home. This text box needs to be reusable; examples of its usage include as a content entry box, a Twitter-style note box, and a user feedback box. In all of those cases, the user is likely to type a lot of text, and a basic text box would just scroll sideways. Sometimes that may be okay, but generally, that’s an awful experience.

Thankfully for us, that simple mistake doesn’t take long to fix:

<textarea name="our_textbox"></textarea>

Now, let’s take a moment to consider that line. A <textarea>: as simple as it can get without removing the name. Isn’t it interesting (or is it just my pedantic mind?) that we need to use a completely different element to add a new line? It isn’t a type of input, or an attribute used to add multi-line to an input. Also, the <textarea> element is not self-closing, but an input is? Strange.

This “moment to consider” sent me time traveling back to October 1993, trawling through the depths of the www-talk mailing list. There was clearly much discussion about the future of the web and what “HTML+” should contain. This was 1993 and they were discussing ideas such as <input type="range"> which wasn’t available until HTML5, and Jim Davis said:

“Well, it’s far-fetched I suppose, but you might use HTML forms as part of a game playing interface.”

This really does show that the web wasn’t just intended to be about documents, as is widely believed. Marc Andreessen suggested having <input type="textarea"> instead of allowing new lines in the single-line text type, saying (http://1997.webhistory.org/www.lists/www-talk.1993q4/0200.html):

“Makes the browser code cleaner — they have to be handled differently internally.”

That’s a fair reason to have <textarea> separate to text, but that’s still not what we ended up with. So why is <textarea> its own element?

I didn’t find any decision in the mailing list archives, but by the following month, the HTML+ Discussion Document had the <textarea> element and a note saying:

“In the initial design for forms, multi-line text fields were supported by the INPUT element with TYPE=TEXT. Unfortunately, this causes problems for fields with long text values as SGML limits the length of attribute literals. The HTML+ DTD allows for up to 1024 characters (the SGML default is only 240 characters!)”

Ah, so that’s why the text goes within the element and cannot be self-closing; they were not able to use an attribute for long text. In 1994, the <textarea> element was included, along with many others from HTML+ such as <option> in the HTML 2 spec.

Okay, that’s enough. I could easily explore the archives further but back to the task.

Styling A <textarea>

So we’ve got a default <textarea>. If you rarely use them or haven’t seen the browser defaults in a long time, then you may be surprised. A <textarea> (made almost purely for multi-line text) looks very similar to a normal text input, except most browser defaults style the border darker, the box slightly larger, and there are lines in the bottom right. Those lines are the resize handle; they aren’t actually part of the spec, so browsers all handle (pun absolutely intended) it in their own way. That generally means that the resize handle cannot be restyled, though you can disable resizing by setting resize: none on the <textarea>. It is possible to create a custom handle or use browser-specific pseudo-elements such as ::-webkit-resizer.


[Image: A default <textarea> with no styling: very small, with a grey border and three lines as a resize handle]

It’s important to understand the defaults, especially because of the resizing ability. It’s a very unique behavior; the user is able to drag to change the size of the element by default. If you don’t override the minimum and maximum sizes then the size could be as small as 9px × 9px (when I checked Chrome) or as large as they have patience to drag it. That’s something that could cause mayhem with the rest of the site’s layout if it’s not considered. Imagine a grid where <textarea> is in one column and a blue box is in another; the size of the blue box is purely decided by the size of the <textarea>.

Other than that, we can approach styling a <textarea> much the same as any other input. Want to change the grey around the edge into thick green dashes? Sure here you go: border: 5px dashed green;. Want to restyle the focus in which a lot of browsers have a slightly blurred box shadow? Change the outline — responsibly though, you know, that’s important for accessibility. You can even add a background image to your <textarea> if that interests you (I can think of a few ideas that would have been popular when skeuomorphic design was more celebrated).

Scope Creep

We’ve all experienced scope creep in our work, whether it is a client that doesn’t think the final version matches their idea or you just try to squeeze in a tiny tweak and end up taking forever to finish it. So I (enjoying the persona of an exaggerated project manager telling us what we need to build) have decided that our <textarea> just is not good enough. Yes, it is now multi-line, and that’s great, and yes, it even ‘pops’ a bit more with its new styling. Yet, it just doesn’t fit the very vague user need that I’ve pretty much just thought of now, after we thought we were almost done.

What happens if the user puts in thousands of words? Or drags the resize handle so far it breaks the layout? It needs to be reusable, as we have already mentioned, but in some of the situations (such as a ‘Twitteresque’ note-taking box), we will need a limit. So the next task is to add a character limit. The user needs to be able to see how many characters they have left.

In the same way we started with <input> instead of <textarea>, it is very easy to think that adding the maxlength attribute would solve our issue. That is one way to limit the number of characters the user types; it uses the browser’s built-in validation, but it is not able to display how many characters are left.

We started with the HTML, then added the CSS, now it is time for some JavaScript. As we’ve seen, charging along like a bull in a china shop without stopping to consider the right approaches can really slow us down in the long run. Especially in situations where there is a large refactor required to change it. So let’s think about this counter; it needs to update as the user types, so we need to trigger an event when the user types. It then needs to check if the amount of text is already at the maximum length.

So which event handler should we choose?

  • change
    Intuitively, it may make sense to choose the change event. It works on <textarea> and does what it says on the tin. Except, it only triggers when the element loses focus so it wouldn’t update while typing.
  • keypress
    The keypress event is triggered when typing any character, which is a good start. But it does not trigger when characters are deleted, so the counter wouldn’t update after pressing backspace. It also doesn’t trigger after a copy/paste.
  • keyup
    This one gets quite close, it is triggered whenever a key has been pressed (including the backspace button). So it does trigger when deleting characters, but still not after a copy/paste.
  • input
    This is the one we want. This triggers whenever a character is added, deleted or pasted.

This is another good example of how using our intuition just isn’t enough sometimes. There are so many quirks (especially in JavaScript!) that are all important to consider before getting started. So the code needs to update a counter (which we’ve done with a span that has a class called counter) inside an input event handler on the <textarea>. The maximum number of characters is set in a variable called maxLength and added to the HTML, so if the value is changed, it is changed in only one place.

var textEl = document.querySelector('textarea')
var counterEl = document.querySelector('.counter')
var maxLength = 200

textEl.setAttribute('maxlength', maxLength)
textEl.addEventListener('input', () => {
  var count = textEl.value.length
  counterEl.innerHTML = `${count}/${maxLength}`
})

Browser Compatibility And Progressive Enhancement

Progressive enhancement is a mindset in which we understand that we have no control over what the user exactly sees on their screen, and instead, we try to guide the browser. Responsive Web Design is a good example, where we build a website that adjusts to suit the content on the particular size viewport without manually setting what each size would look like. It means that on the one hand, we strongly care that a website works across all browsers and devices, but on the other hand, we don’t care that they look exactly the same.

Currently, we are missing a trick. We haven’t set a sensible default for the counter. The default is currently “0/200” if 200 were the maximum length; this kind of makes sense, but it has two downsides. The first is that it doesn’t really make sense at first glance. You need to start typing before it is obvious the 0 updates as you type. The other downside is that the 0 only updates as you type, meaning that if the JavaScript event doesn’t trigger properly (maybe the script did not download correctly or uses JavaScript that an old browser doesn’t support, such as the arrow function in the code above), then it won’t do anything. A better way would be to think carefully beforehand. How would we go about making it useful both when it is working and when it isn’t?

In this case, we could make the default text be “200 character limit.” This would mean that without any JavaScript at all, the user would always see the character limit but it just wouldn’t feedback about how close they are to the limit. However, when the JavaScript is working, it would update as they type and could say “200 characters remaining” instead. It is a very subtle change but means that although two users could get different experiences, neither are getting an experience that feels broken.

Another default that we could set is the maxlength on the element itself rather than afterwards with JavaScript. Without doing this, the baseline version (the one without JS) would let the user type past the limit.
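A minimal sketch of that combination follows, assuming the same counter span as before, with “200 character limit” hard-coded as its text in the markup and maxlength="200" set on the <textarea>:

var textEl = document.querySelector('textarea')
var counterEl = document.querySelector('.counter') // markup default: "200 character limit"
var maxLength = parseInt(textEl.getAttribute('maxlength'), 10)

function updateCounter () {
  counterEl.innerHTML = (maxLength - textEl.value.length) + ' characters remaining'
}

// Only once the script actually runs does the static default get replaced
updateCounter()
textEl.addEventListener('input', updateCounter)

If the script never arrives, the user still sees the limit; if it does, the counter upgrades itself and stays current.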

User Testing

It’s all very well testing on various browsers and thinking about the various permutations of how devices could serve the website in a different way, but are users able to use it?

Generally speaking, no. I’m consistently shocked by user testing; people never use a site how you expect them to. This means that user testing is crucial.

It’s quite hard to simulate a user test session in an article, so for the purposes of this article, I’m going to just focus on one point that I’ve seen users struggle with on various projects.

The user is happily writing away, gets to 0 characters remaining, and then gets stuck. They forget what they were writing, or they don’t notice that the box has stopped accepting input.

This happens because there is nothing telling the user that something has changed; if they are typing away without paying much attention, then they can hit the maximum length without noticing. This is a frustrating experience.

One way to solve this issue is to allow overtyping, so the maximum length still counts for it to be valid when submitted but it allows the user to type as much as they want and then edit it before submission. This is a good solution as it gives the control back to the user.

Okay, so how do we implement overtyping? Instead of jumping into the code, let’s step through it in theory. maxlength doesn’t allow overtyping; it just stops allowing input once it hits the limit. So we need to remove maxlength and write a JS equivalent. We can use the input event handler as we did before, as we know that works on paste, etc. So in that event, the handler would check if the user has typed more than the limit, and if so, the counter text could change to say “10 characters too many.” The baseline version (without the JS) would no longer have a limit at all, so a useful middle ground could be to add the maxlength to the element in the HTML and remove the attribute using JavaScript.

That way, the user would see that they are over the limit without being cut off while typing. There would still need to be validation to make sure it isn’t submitted, but that is worth the extra small bit of work to make the user experience far better.
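Here is a rough sketch of that theory, again assuming the counter span from earlier (the submit-time validation is left out):

var textEl = document.querySelector('textarea')
var counterEl = document.querySelector('.counter')
var maxLength = parseInt(textEl.getAttribute('maxlength'), 10)

// The attribute gave the no-JS baseline a hard limit; now that JS is
// running, remove it so that the user can overtype
textEl.removeAttribute('maxlength')

textEl.addEventListener('input', () => {
  var count = textEl.value.length
  counterEl.innerHTML = count > maxLength
    ? (count - maxLength) + ' characters too many'
    : (maxLength - count) + ' characters remaining'
})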


[Image: Allowing the user to overtype: “17 characters too many” shown in red text next to a <textarea>]

Designing The Overtype

This gets us to quite a solid position: the user is now able to use any device and get a decent experience. If they type too much it is not going to cut them off; instead, it will just allow it and encourage them to edit it down.

There’s a variety of ways this could be designed differently, so let’s look at how Twitter handles it:


[Image: Twitter’s <textarea>, with overtyped text shown on a red background]

Twitter has been iterating its main tweet <textarea> since the company started. The current version uses a lot of techniques that we could consider using.

As you type on Twitter, there is a circle that completes once you get to the character limit of 280. Interestingly, it doesn’t say how many characters are available until you are 20 characters away from the limit. At that point, the incomplete circle turns orange. Once you have 0 characters remaining, it turns red. After the 0 characters, the countdown goes negative; it doesn’t appear to have a limit on how far you can overtype (I tried as far as 4,000 characters remaining) but the tweet button is disabled while overtyping.

So this works the same way as our <textarea> does, with the main difference being that the characters remaining are represented by a circle which updates as you type and only shows the number once you are within 20 characters of the limit. We could implement this by removing the text and replacing it with an SVG circle.
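As a sketch of one way that circle could work (assuming an SVG circle element with the class progress; the 280 limit is Twitter’s), the circle’s circumference can be mapped onto stroke-dashoffset:

var textEl = document.querySelector('textarea')
var ring = document.querySelector('circle.progress')
var maxLength = 280

// stroke-dasharray/-dashoffset let us draw a partial circle outline
var circumference = 2 * Math.PI * ring.r.baseVal.value
ring.style.strokeDasharray = circumference

textEl.addEventListener('input', () => {
  var fraction = Math.min(textEl.value.length / maxLength, 1)
  // The gap shrinks as the limit approaches; the circle completes at the limit
  ring.style.strokeDashoffset = circumference * (1 - fraction)
})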

The other thing that Twitter does is add a red background behind the overtyped text. This makes it completely obvious that the user is going to need to edit or remove some of the text to publish the tweet. It is a really nice part of the design. So how would we implement that? We would start again from the beginning.

You remember the part where we realized that a basic input text box would not give us multiline? And that a maxlength attribute would not give us the ability to overtype? This is one of those cases. As far as I know, there is nothing in CSS that gives us the ability to style parts of the text inside a <textarea>. This is the point where some people would suggest web components, as what we would need is a pretend <textarea>. We would need some kind of element — probably a div — with contenteditable on it and in JS we would need to wrap the overtyped text in a span that is styled with CSS.
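A heavily simplified sketch of that span-wrapping idea follows. Note that it ignores caret restoration (the cursor will jump on every re-render), which any real implementation would have to handle:

var editor = document.querySelector('div[contenteditable]')
var maxLength = 280

function escapeHTML (str) {
  return str.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
}

editor.addEventListener('input', () => {
  var text = editor.textContent
  if (text.length <= maxLength) return

  // Wrap the overtyped tail in a span so CSS can give it a red background
  editor.innerHTML = escapeHTML(text.slice(0, maxLength)) +
    '<span class="overtype">' + escapeHTML(text.slice(maxLength)) + '</span>'
})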

What would the baseline non-JS version look like then? Well, it wouldn’t work at all, because while contenteditable will work without JS, we would have no way to actually do anything with it. So we would need to have a <textarea> by default and remove that if JS is available. We would also need to do a lot of accessibility testing because, while we can trust a <textarea> to be accessible, relying on browser features is a much safer bet than building our own components. How does Twitter handle it? You may have seen it; if you are on a train and your JavaScript doesn’t load while going into a tunnel, then you get chucked into a decade-old legacy version of Twitter where there is no character limit at all.

What happens then if you tweet over the character limit? Twitter reloads the page with an error message saying “Your Tweet was over the character limit. You’ll have to be more clever.” No, Twitter. You need to be more clever.

Retro

The only way to conclude this dramatization is a retrospective. What went well? What did we learn? What would we do differently next time or what would we change completely?

We started very simple with a basic textbox; in some ways, this is good because it can be all too easy to overcomplicate things from the beginning, and an MVP approach is good. However, as time went on, we realized how important it is to apply some critical thinking and to consider what we are doing. We should have known a basic textbox wouldn’t be enough and that a way of setting a maximum length would be useful. It is even possible that, had we conducted or sat in on user research sessions in the past, we could have anticipated the need to allow overtyping. As for the browser compatibility and user experiences across devices, considering progressive enhancement from the beginning would have caught most of those potential issues.

So one change we could make is to be much more proactive about the thinking process instead of jumping straight into the task, thinking that the code is easy when actually the code is the least important part.

In a similar vein, we had the “scope creep” of maxlength, and while we could possibly have anticipated that, we would rather not have any scope creep at all. So having everybody involved from the beginning would be very useful, as a diverse multidisciplinary approach to even small tasks like this can seriously reduce the time it takes to figure out and fix all the unexpected tweaks.

Back To The Real World

Okay, so I can get quite deep into this made-up project, but I think it demonstrates well how complicated the most seemingly simple tasks can be. Being user-focussed, having a progressive enhancement mindset, and thinking things through from the beginning can have a real impact on both the speed and quality of delivery. And I didn’t even mention testing!

I went into some detail about the history of the <textarea> and which event listeners to use; some of this can seem like overkill, but I find it fascinating to gain a real understanding of the subtleties of the web, and it can often help demystify issues we will face in the future.


Logging Activity With The Web Beacon API

Drew McLellan



The Beacon API is a JavaScript-based Web API for sending small amounts of data from the browser to the web server without waiting for a response. In this article, we’ll look at what that can be useful for, what makes it different from familiar techniques like XMLHttpRequest (‘Ajax’), and how you can get started using it.

If you know why you want to use Beacon already, feel free to jump directly to the Getting Started section.

What Is The Beacon API For?

The Beacon API is used for sending small amounts of data to a server without waiting for a response. That last part is critical and is the key to why Beacon is so useful — our code never even gets to see a response, even if the server sends one. Beacons are specifically for sending data and then forgetting about it. We don’t expect a response and we don’t get a response.

Think of it like a postcard sent home when on vacation. You put a small amount of data on it (a bit of “Wish you were here” and “The weather’s been lovely”), put it in the mailbox, and you don’t expect a response. No one sends a return postcard saying “Yes, I do wish I was there actually, thank you very much!”

For modern websites and applications, there’s a number of use cases that fall very neatly into this pattern of send-and-forget.

Tracking Stats And Analytics Data

The first use case that comes to mind for most people is analytics. Big solutions like Google Analytics might give a good overview of things like page visits, but what if we wanted something more customized? We could write some JavaScript to track what’s happening in a page (maybe how a user interacts with a component, how far they’ve scrolled, or which articles have been displayed before they follow a CTA), but we then need to send that data to the server when the user leaves the page. Beacon is perfect for this, as we’re just logging the data and don’t need a response.

There’s no reason we couldn’t also cover the sort of mundane tasks often handled by Google Analytics, reporting on the user themselves and the capability of their device and browser. If the user has a logged in session, you could even tie those stats back to a known individual. Whatever data you gather, you can send it back to the server with Beacon.

Debugging And Logging

Another useful application for this behavior is logging information from your JavaScript code. Imagine you have a complex interactive component on your page that works perfectly for all your tests, but occasionally fails in production. You know it’s failing, but you can’t see the error in order to begin debugging it. If you can detect a failure in the code itself, you could then gather up diagnostics and use Beacon to send it all back for logging.

In fact, any logging task can usefully be performed using Beacon, be that creating save-points in a game, collecting information on feature use, or recording results from a multivariate test. If it’s something that happens in the browser that you want the server to know about, then Beacon is likely a contender.
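As a sketch of that idea (the endpoint URL here is hypothetical), a global error handler can gather up diagnostics and beacon them home:

window.addEventListener('error', (event) => {
  // No Beacon support? Nothing to do here.
  if (!navigator.sendBeacon) return;

  // Gather what we know about the failure
  let data = new FormData();
  data.append('message', event.message);
  data.append('source', event.filename + ':' + event.lineno);
  data.append('url', document.URL);

  // Fire and forget; we never see a response
  navigator.sendBeacon('/api/log-error', data);
});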

Can’t We Already Do This?

I know what you’re thinking. None of this is new, is it? We’ve been able to communicate from the browser to the server using XMLHttpRequest for more than a decade. More recently, we also have the Fetch API, which does much the same thing with a more modern promise-based interface. Given that, why do we need the Beacon API at all?

The key here is that because we don’t get a response, the browser can queue up the request and send it without blocking execution of any other code. As far as the browser is concerned, it doesn’t matter if our code is still running or not, or where the script execution has got to; as there’s nothing to return, it can just background the sending of the HTTP request until it’s convenient to send it.

That might mean waiting until CPU load is lower, or until the network is free, or even just sending it right away if it can. The important thing is that the browser queues the beacon and returns control immediately. It does not hold things up while the beacon sends.

To understand why this is a big deal, we need to look at how and when these sorts of requests are issued from our code. Take our example of an analytics logging script. Our code may be timing how long the users spend on a page, so it becomes critical that the data is sent back to the server at the last possible moment. When the user goes to leave a page, we want to stop timing and send the data back home.

Typically, you’d use either the unload or beforeunload event to execute the logging. These are fired when the user does something like following a link on the page to navigate away. The trouble here is that code running on one of the unload events can block execution and delay the unloading of the page. If unloading of the page is delayed, then the loading of the next page is also delayed, and so the experience feels really sluggish.

Keep in mind how slow HTTP requests can be. If you’re thinking about performance, typically one of the main factors you try to cut down on is extra HTTP requests because going out to the network and getting a response can be super slow. The very last thing you want to do is put that slowness between the activation of a link and the start of the request for the next page.

Beacon gets around this by queuing the request without blocking, returning control immediately back to your script. The browser then takes care of sending that request in the background without blocking. This makes everything much faster, which makes users happier and lets us all keep our jobs.

Getting Started

So we understand what Beacon is and why we might use it. Let’s get started with some code. The basics couldn’t be simpler:

let result = navigator.sendBeacon(url, data);

The result is a boolean: true if the browser accepted and queued the request, and false if there was a problem in doing so.

Using navigator.sendBeacon()

navigator.sendBeacon takes two parameters. The first is the URL to make the request to. The request is performed as an HTTP POST, sending any data provided in the second parameter.

The data parameter can be in one of several formats, all of which are taken directly from the Fetch API. This can be a Blob, a BufferSource, FormData or URLSearchParams — basically any of the body types used when making a request with Fetch.

I like using FormData for basic key-value data as it’s uncomplicated and easy to read back.

// URL to send the data to
let url = '/api/my-endpoint';
    
// Create a new FormData and add a key/value pair
let data = new FormData();
data.append('hello', 'world');
    
let result = navigator.sendBeacon(url, data);
    
if (result) {
  console.log('Successfully queued!');
} else {
  console.log('Failure.');
}
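FormData isn’t the only option. Because the body types come straight from Fetch, a Blob works too if you’d rather send JSON; here is a sketch, with the same hypothetical endpoint:

// URL to send the data to
let url = '/api/my-endpoint';

// Package a JSON payload as a Blob with an explicit content type
let payload = JSON.stringify({ hello: 'world' });
let blob = new Blob([payload], { type: 'application/json' });

let result = navigator.sendBeacon(url, blob);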

Browser Support

Support in browsers for Beacon is very good, with the only notable exceptions being Internet Explorer (works in Edge) and Opera Mini. For most uses, that should be fine, but it’s worth testing for support before trying to use navigator.sendBeacon.

That’s easy to do:

if (navigator.sendBeacon) {
  // Beacon code
} else {
  // No Beacon. Maybe fall back to XHR?
}

If Beacon isn’t available and your request is important, you could fall back to a blocking method such as XHR. Depending on your audience and purpose, you might equally choose to not bother.

An Example: Logging Time On A Page

To see this in practice, let’s create a basic system to time how long a user stays on a page. When the page loads we’ll note the time, and when the user leaves the page we’ll send the start time and current time to the server.

As we only care about time spent (not the actual time of day) we can use performance.now() to get a basic timestamp as the page loads:

let startTime = performance.now();

If we wrap up our logging into a function, we can call it when the page unloads.

let logVisit = function() {
  // Test that we have support
  if (!navigator.sendBeacon) return true;

  // URL to send the data to, e.g.
  let url = '/api/log-visit';

  // Data to send
  let data = new FormData();
  data.append('start', startTime);
  data.append('end', performance.now());
  data.append('url', document.URL);

  // Let's go!
  navigator.sendBeacon(url, data);
};

Finally, we need to call this function when the user leaves the page. My first instinct was to use the unload event, but Safari on a Mac seems to block the request with a security warning, so beforeunload works just fine for us here.

window.addEventListener('beforeunload', logVisit);

When the page unloads (or just before it does), our logVisit() function will be called, and provided the browser supports the Beacon API, our beacon will be sent.

(Note that if there is no Beacon support, we return true and pretend it all worked great. Returning false would cancel the event and stop the page unloading. That would be unfortunate.)

Considerations When Tracking

As so many of the potential uses for Beacon revolve around tracking of activity, I think it would be remiss not to mention the social and legal responsibilities we have as developers when logging and tracking activity that could be tied back to users.

GDPR

We may think of the recent European GDPR laws as they relate to email, but of course, the legislation relates to storing any type of personal data. If you know who your users are and can identify their sessions, then you should check what activity you are logging and how it relates to your stated policies.

Often we don’t need to track as much data as our instincts as developers tell us we should. It can be better to deliberately not store information that would identify a user, and then you reduce your likelihood of getting things wrong.

DNT: Do Not Track

In addition to legal requirements, most browsers have a setting to enable the user to express a desire not to be tracked. Do Not Track sends an HTTP header with the request that looks like this:

DNT: 1

If you’re logging data that can track a specific user and the user sends a positive DNT header, then it would be best to follow the user’s wishes and anonymize that data or not track it at all.

In PHP, for example, you can very easily test for this header like so:

if (!empty($_SERVER['HTTP_DNT'])) {
  // User does not wish to be tracked ...
}
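You can also check on the client before sending anything, via navigator.doNotTrack, which exposes the same preference to JavaScript (a sketch; note that this property is non-standard and deprecated in some browsers, so treat it as a best-effort signal):

// '1' means the user has asked not to be tracked
let dntEnabled = navigator.doNotTrack === '1';

if (!dntEnabled) {
  let data = new FormData();
  data.append('page', document.URL);
  navigator.sendBeacon('/api/log-visit', data);
}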

In Conclusion

The Beacon API is a really useful way to send data from a page back to the server, particularly in a logging context. Browser support is very broad, and it enables you to seamlessly log data without negatively impacting the user’s browsing experience and the performance of your site. The non-blocking nature of the requests means that the performance is much faster than alternatives such as XHR and Fetch.

If you’d like to read more about the Beacon API, the documentation on MDN is a good place to start.


JavaScript Graphics and Animation

Full-day workshop • June 28th. In this workshop, Seb will demonstrate a variety of beautiful visual effects using JavaScript and HTML5 canvas. You will learn animation and graphics techniques that you can use to add a sense of dynamism to your projects.
Seb demystifies programming and explores its artistic possibilities. His presentations and workshops enable artists to overcome their fear of code and encourage programmers of all backgrounds to be more creative and imaginative.


Keeping Node.js Fast: Tools, Techniques, And Tips For Making High-Performance Node.js Servers

David Mark Clements



If you’ve been building anything with Node.js for long enough, then you’ve no doubt experienced the pain of unexpected speed issues. JavaScript is an evented, asynchronous language. That can make reasoning about performance tricky, as will become apparent. The surging popularity of Node.js has exposed the need for tooling, techniques and thinking suited to the constraints of server-side JavaScript.

When it comes to performance, what works in the browser doesn’t necessarily suit Node.js. So, how do we make sure a Node.js implementation is fast and fit for purpose? Let’s walk through a hands-on example.

Tools

Node is a very versatile platform, but one of the predominant applications is creating networked processes. We’re going to focus on profiling the most common of these: HTTP web servers.

We’ll need a tool that can blast a server with lots of requests while measuring the performance. For example, we can use AutoCannon:

npm install -g autocannon

Other good HTTP benchmarking tools include Apache Bench (ab) and wrk2, but AutoCannon is written in Node, provides similar (or sometimes greater) load pressure, and is very easy to install on Windows, Linux, and Mac OS X.

After we’ve established a baseline performance measurement, if we decide our process could be faster we’ll need some way to diagnose problems with the process. A great tool for diagnosing various performance issues is Node Clinic, which can also be installed with npm:

npm install -g clinic

This actually installs a suite of tools. We’ll be using Clinic Doctor and Clinic Flame (a wrapper around 0x) as we go.

Note: For this hands-on example we’ll need Node 8.11.2 or higher.

The Code

Our example case is a simple REST server with a single resource: a large JSON payload exposed as a GET route at /seed/v1. The server lives in an app folder, which consists of a package.json file (depending on restify 7.1.0), an index.js file, and a util.js file.

The index.js file for our server looks like so:

'use strict'

const restify = require('restify')
const { etagger, timestamp, fetchContent } = require('./util')()
const server = restify.createServer()

server.use(etagger().bind(server))

server.get('/seed/v1', function (req, res, next) {
  fetchContent(req.url, (err, content) => {
    if (err) return next(err)
    res.send({ data: content, url: req.url, ts: timestamp() })
    next()
  })
})

server.listen(3000)

This server is representative of the common case of serving client-cached dynamic content. This is achieved with the etagger middleware, which calculates an ETag header for the latest state of the content.

The util.js file provides implementation pieces that would commonly be used in such a scenario, a function to fetch the relevant content from a backend, the etag middleware and a timestamp function that supplies timestamps on a minute-by-minute basis:

'use strict'

require('events').defaultMaxListeners = Infinity
const crypto = require('crypto')

module.exports = () => {
  const content = crypto.rng(5000).toString('hex')
  const ONE_MINUTE = 60000
  var last = Date.now()

  function timestamp () {
    var now = Date.now()
    if (now - last >= ONE_MINUTE) last = now
    return last
  }

  function etagger () {
    var cache = {}
    var afterEventAttached = false
    function attachAfterEvent (server) {
      if (attachAfterEvent === true) return
      afterEventAttached = true
      server.on('after', (req, res) => {
        if (res.statusCode !== 200) return
        if (!res._body) return
        const key = crypto.createHash('sha512')
          .update(req.url)
          .digest()
          .toString('hex')
        const etag = crypto.createHash('sha512')
          .update(JSON.stringify(res._body))
          .digest()
          .toString('hex')
        if (cache[key] !== etag) cache[key] = etag
      })
    }
    return function (req, res, next) {
      attachAfterEvent(this)
      const key = crypto.createHash('sha512')
        .update(req.url)
        .digest()
        .toString('hex')
      if (key in cache) res.set('Etag', cache[key])
      res.set('Cache-Control', 'public, max-age=120')
      next()
    }
  }

  function fetchContent (url, cb) {
    setImmediate(() => {
      if (url !== '/seed/v1') cb(Object.assign(Error('Not Found'), { statusCode: 404 }))
      else cb(null, content)
    })
  }

  return { timestamp, etagger, fetchContent }
}

By no means take this code as an example of best practices! There are multiple code smells in this file, but we’ll locate them as we measure and profile the application.

The full source for our starting point, the slow server, can be found over here.

Profiling

In order to profile, we need two terminals, one for starting the application, and the other for load testing it.

In one terminal, within the app folder, we can run:

node index.js

In another terminal we can profile it like so:

autocannon -c100 localhost:3000/seed/v1

This will open 100 concurrent connections and bombard the server with requests for ten seconds.

The results should be something similar to the following (Running 10s test @ http://localhost:3000/seed/v1 — 100 connections):

Stat          Avg        Stdev      Max
Latency (ms)  3086.81    1725.2     5554
Req/Sec       23.1       19.18      65
Bytes/Sec     237.98 kB  197.7 kB   688.13 kB

231 requests in 10s, 2.4 MB read

Results will vary depending on the machine. However, considering that a “Hello World” Node.js server can easily handle thirty thousand requests per second on the machine that produced these results, 23 requests per second with an average latency exceeding 3 seconds is dismal.

Diagnosing

Discovering The Problem Area

We can diagnose the application with a single command, thanks to Clinic Doctor’s --on-port command. Within the app folder we run:

clinic doctor --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js

This will create an HTML file that will automatically open in our browser when profiling is complete.

The results should look something like the following:


Clinic Doctor has detected an Event Loop issue


Clinic Doctor results

The Doctor is telling us that we probably have an Event Loop issue.

Along with the message near the top of the UI, we can also see that the Event Loop chart is red, and shows a constantly increasing delay. Before we dig deeper into what this means, let’s first understand the effect the diagnosed issue is having on the other metrics.

We can see the CPU is consistently at or above 100% as the process works hard to process queued requests. Node’s JavaScript engine (V8) actually uses two CPU cores here: one for the Event Loop and the other for Garbage Collection. When we see the CPU spiking up to 120% in some cases, the process is collecting objects related to handled requests.

We see this correlated in the Memory graph. The solid line in the Memory chart is the Heap Used metric. Any time there’s a spike in CPU we see a fall in the Heap Used line, showing that memory is being deallocated.

Active Handles are unaffected by the Event Loop delay. An active handle is an object that represents either I/O (such as a socket or file handle) or a timer (such as a setInterval). We instructed AutoCannon to open 100 connections (-c100), so active handles stay at a consistent count of 103. The other three are the handles for STDOUT, STDERR, and the server itself.

If we click the Recommendations panel at the bottom of the screen, we should see something like the following:


Clinic Doctor recommendations panel opened


Viewing issue specific recommendations

Short-Term Mitigation

Root cause analysis of serious performance issues can take time. In the case of a live deployed project, it’s worth adding overload protection to servers or services. The idea of overload protection is to monitor event loop delay (among other things), and respond with “503 Service Unavailable” if a threshold is passed. This allows a load balancer to fail over to other instances, or in the worst case means users will have to refresh. The overload-protection module can provide this with minimum overhead for Express, Koa, and Restify. The Hapi framework has a load configuration setting which provides the same protection.
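For restify, a minimal sketch of wiring this up might look like the following. The option names follow the overload-protection README at the time of writing, and the thresholds are placeholder assumptions to tune per deployment:

'use strict'

const restify = require('restify')
// overload-protection returns middleware that responds 503 under load
const protect = require('overload-protection')('restify', {
  maxEventLoopDelay: 42, // ms of event loop delay before shedding load
  maxHeapUsedBytes: 0,   // 0 disables the heap-based check
  maxRssBytes: 0         // 0 disables the RSS-based check
})

const server = restify.createServer()
server.use(protect)
server.listen(3000)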

Understanding The Problem Area

As Clinic Doctor’s short explanation indicates, if the Event Loop is delayed to the level that we’re observing, it’s very likely that one or more functions are “blocking” the Event Loop.

It’s especially important with Node.js to recognize this primary JavaScript characteristic: asynchronous events cannot occur until currently executing code has completed.

This is why a setTimeout cannot be precise.

For instance, try running the following in a browser’s DevTools or the Node REPL:

console.time('timeout')
setTimeout(console.timeEnd, 100, 'timeout')
let n = 1e7
while (n--) Math.random()

The resulting time measurement will never be 100ms. It will likely be in the range of 150ms to 250ms. The setTimeout scheduled an asynchronous operation (console.timeEnd), but the currently executing code has not yet completed; there are two more lines. The currently executing code is known as the current “tick.” For the tick to complete, Math.random has to be called ten million times. If this takes 100ms, then the total time before the timeout resolves will be 200ms (plus however long it takes the setTimeout function to actually queue the timeout beforehand, usually a couple of milliseconds).

In a server-side context, if an operation in the current tick takes a long time to complete, requests cannot be handled and data fetching cannot occur, because asynchronous code will not be executed until the current tick has completed. This means that computationally expensive code will slow down all interactions with the server. So it’s recommended to split resource-intensive work into separate processes and call them from the main server; this avoids cases where a rarely used but expensive route slows down the performance of other frequently used but inexpensive routes.
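As an illustration, here is a minimal sketch of pushing expensive work into a child process with Node’s child_process module. The worker.js file name and the message shape are illustrative assumptions, not part of the article’s example app:

'use strict'

const { fork } = require('child_process')
const worker = fork('./worker.js') // worker.js would do the expensive work

function expensiveOperation (input, cb) {
  // Note: a production version would correlate requests and responses
  // (e.g. with an id), since concurrent callers share one worker here.
  worker.once('message', (result) => cb(null, result))
  worker.send(input)
}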

The example server has some code that is blocking the Event Loop, so the next step is to locate that code.

Analyzing

One way to quickly identify poorly performing code is to create and analyze a flame graph. A flame graph represents function calls as blocks sitting on top of each other — not over time but in aggregate. It’s called a “flame graph” because it typically uses an orange-to-red color scheme, where the redder a block is, the “hotter” the function is, meaning the more likely it is to be blocking the event loop. Capturing data for a flame graph is done by sampling the CPU: a snapshot is taken of the currently executing function and its stack. The heat is determined by the percentage of time during profiling that a given function is at the top of the stack (i.e. the function currently being executed) for each sample. If it’s not the last function to be called within that stack, then it’s likely to be blocking the event loop.

Let’s use clinic flame to generate a flame graph of the example application:

clinic flame --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js

The result should open in our browser with something like the following:


Clinic’s flame graph shows that server.on is the bottleneck


Clinic’s flame graph visualization

The width of a block represents how much time it spent on CPU overall. Three main stacks can be observed taking up the most time, all of them highlighting server.on as the hottest function. In truth, all three stacks are the same. They diverge because, during profiling, optimized and unoptimized functions are treated as separate call frames. Functions prefixed with a * are optimized by the JavaScript engine, and those prefixed with a ~ are unoptimized. If the optimized state isn’t important to us, we can simplify the graph further by pressing the Merge button. This should lead to a view similar to the following:


Merged flame graph


Merging the flame graph

From the outset, we can infer that the offending code is in the util.js file of the application code.

The slow function is also an event handler: the functions leading up to it are part of the core events module, and server.on is a fallback name for an anonymous function provided as an event-handling function. We can also see that this code isn’t in the same tick as the code that actually handles the request. If it were, core http, net, and stream functions would be in the stack.

Such core functions can be found by expanding other, much smaller, parts of the flame graph. For instance, try using the search input on the top right of the UI to search for send (the name of both restify and http internal methods). It should be on the right of the graph (functions are alphabetically sorted):


Flame graph has two small blocks highlighted which represent HTTP processing function


Searching the flame graph for HTTP processing functions

Notice how comparatively small all the actual HTTP handling blocks are.

We can click one of the blocks highlighted in cyan, which will expand to show functions like writeHead and write in the http_outgoing.js file (part of the Node core http library):


Flame graph has zoomed into a different view showing HTTP related stacks


Expanding the flame graph into HTTP relevant stacks

We can click all stacks to return to the main view.

The key point here is that even though the server.on function isn’t in the same tick as the actual request handling code, it’s still affecting the overall server performance by delaying the execution of otherwise performant code.

Debugging

We know from the flame graph that the problematic function is the event handler passed to server.on in the util.js file.

Let’s take a look:

server.on('after', (req, res) => {
  if (res.statusCode !== 200) return
  if (!res._body) return
  const key = crypto.createHash('sha512')
    .update(req.url)
    .digest()
    .toString('hex')
  const etag = crypto.createHash('sha512')
    .update(JSON.stringify(res._body))
    .digest()
    .toString('hex')
  if (cache[key] !== etag) cache[key] = etag
})

It’s well known that cryptography tends to be expensive, as does serialization (JSON.stringify), but why don’t they appear in the flame graph? These operations are in the captured samples, but they’re hidden behind the cpp filter. If we press the cpp button, we should see something like the following:


Additional blocks related to C++ have been revealed in the flame graph (main view)


Revealing serialization and cryptography C++ frames

The internal V8 instructions relating to both serialization and cryptography are now shown as the hottest stacks and as taking up most of the time. The JSON.stringify method directly calls C++ code; this is why we don’t see a JavaScript function. In the cryptography case, functions like createHash and update are in the data, but they are either inlined (which means they disappear in the merged view) or too small to render.

Once we start to reason about the code in the etagger function, it can quickly become apparent that it’s poorly designed. Why are we taking the server instance from the function context? There’s a lot of hashing going on; is all of that necessary? There’s also no If-None-Match header support in the implementation, which would mitigate some of the load in some real-world scenarios, because clients would only make a head request to determine freshness.
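For illustration, a minimal sketch of the missing If-None-Match support might look like the following. The raw-URL cache key is a simplifying assumption (the article’s middleware hashes the URL), and this is not the article’s actual code:

'use strict'

// Hypothetical conditional-GET middleware, assuming `cache` maps request
// URLs to their last known ETags.
function ifNoneMatch (cache) {
  return function (req, res, next) {
    const etag = cache[req.url]
    if (etag && req.headers['if-none-match'] === etag) {
      res.writeHead(304) // content unchanged, no body required
      res.end()
      return next(false) // in restify, next(false) stops the handler chain
    }
    next()
  }
}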

Let’s ignore all of these points for the moment and validate the finding that the actual work being performed in server.on is indeed the bottleneck. We can do this by setting the server.on code to an empty function and generating a new flame graph.

Alter the etagger function to the following:

function etagger () {
  var cache = {}
  var afterEventAttached = false
  function attachAfterEvent (server) {
    if (attachAfterEvent === true) return
    afterEventAttached = true
    server.on('after', (req, res) => {})
  }
  return function (req, res, next) {
    attachAfterEvent(this)
    const key = crypto.createHash('sha512')
      .update(req.url)
      .digest()
      .toString('hex')
    if (key in cache) res.set('Etag', cache[key])
    res.set('Cache-Control', 'public, max-age=120')
    next()
  }
}

The event listener function passed to server.on is now a no-op.

Let’s run clinic flame again:

clinic flame --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js

This should produce a flame graph similar to the following:


Flame graph shows that Node.js event system stacks are still the bottleneck


Flame graph of the server when server.on is an empty function

This looks better, and we should have noticed an increase in requests per second. But why is the event emitting code so hot? We would expect the HTTP processing code to take up the majority of CPU time at this point; after all, nothing is executing in the server.on handler at all.

This type of bottleneck is caused by a function being executed more than it should be.

The following suspicious code at the top of util.js may be a clue:

require('events').defaultMaxListeners = Infinity

Let’s remove this line and start our process with the --trace-warnings flag:

node --trace-warnings index.js

If we profile with AutoCannon in another terminal, like so:

autocannon -c100 localhost:3000/seed/v1

Our process will output something similar to:

(node:96371) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 after listeners added. Use emitter.setMaxListeners() to increase limit
  at _addListener (events.js:280:19)
  at Server.addListener (events.js:297:10)
  at attachAfterEvent 
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/util.js:22:14)
  at Server.<anonymous>
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/util.js:25:7)
  at call
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/chain.js:164:9)
  at next
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/chain.js:120:9)
  at Chain.run
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/chain.js:123:5)
  at Server._runUse
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/server.js:976:19)
  at Server._runRoute
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/server.js:918:10)
  at Server._afterPre
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/server.js:888:10)

Node is telling us that lots of events are being attached to the server object. This is strange, because there’s a boolean that checks whether the event has been attached and then returns early, essentially making attachAfterEvent a no-op after the first event is attached.

Let’s take a look at the attachAfterEvent function:

var afterEventAttached = false
function attachAfterEvent (server) {
  if (attachAfterEvent === true) return
  afterEventAttached = true
  server.on('after', (req, res) => {})
}

The conditional check is wrong! It checks whether attachAfterEvent is true instead of afterEventAttached. This means a new event is being attached to the server instance on every request, and then all prior attached events are being fired after each request. Whoops!

Optimizing

Now that we’ve discovered the problem areas, let’s see if we can make the server faster.

Low-Hanging Fruit

Let’s put the server.on listener code back (instead of an empty function) and use the correct boolean name in the conditional check. Our etagger function looks as follows:

function etagger () {
  var cache = {}
  var afterEventAttached = false
  function attachAfterEvent (server) {
    if (afterEventAttached === true) return
    afterEventAttached = true
    server.on('after', (req, res) => {
      if (res.statusCode !== 200) return
      if (!res._body) return
      const key = crypto.createHash('sha512')
        .update(req.url)
        .digest()
        .toString('hex')
      const etag = crypto.createHash('sha512')
        .update(JSON.stringify(res._body))
        .digest()
        .toString('hex')
      if (cache[key] !== etag) cache[key] = etag
    })
  }
  return function (req, res, next) {
    attachAfterEvent(this)
    const key = crypto.createHash('sha512')
      .update(req.url)
      .digest()
      .toString('hex')
    if (key in cache) res.set('Etag', cache[key])
    res.set('Cache-Control', 'public, max-age=120')
    next()
  }
}

Now we check our fix by profiling again. Start the server in one terminal:

node index.js

Then profile with AutoCannon:

autocannon -c100 localhost:3000/seed/v1

We should see results somewhere in the range of a 200 times improvement (Running 10s test @ http://localhost:3000/seed/v1 — 100 connections):

Stat          Avg       Stdev     Max
Latency (ms)  19.47     4.29      103
Req/Sec       5011.11   506.2     5487
Bytes/Sec     51.8 MB   5.45 MB   58.72 MB

50k requests in 10s, 519.64 MB read

It’s important to balance potential server cost reductions with development costs. We need to define, in our own situational contexts, how far we need to go in optimizing a project. Otherwise, it can be all too easy to put 80% of the effort into 20% of the speed enhancements. Do the constraints of the project justify this?

In some scenarios, it could be appropriate to achieve a 200 times improvement from low-hanging fruit and call it a day. In others, we may want to make our implementation as fast as it can possibly be. It really depends on project priorities.

One way to control resource spend is to set a goal. For instance, 10 times improvement, or 4000 requests per second. Basing this on business needs makes the most sense. For instance, if server costs are 100% over budget, we can set a goal of 2x improvement.

Taking It Further

If we produce a new flame graph of our server, we should see something similar to the following:


Flame graph still shows server.on as the bottleneck, but a smaller bottleneck


Flame graph after the performance bug fix has been made

The event listener is still the bottleneck; it’s still taking up one-third of CPU time during profiling (its width is about one third of the whole graph).

What additional gains can be made, and are the changes (along with their associated disruption) worth making?

With an optimized implementation, which is nonetheless slightly more constrained, the following performance characteristics can be achieved (Running 10s test @ http://localhost:3000/seed/v1 — 10 connections):

Stat          Avg       Stdev     Max
Latency (ms)  0.64      0.86      17
Req/Sec       8330.91   757.63    8991
Bytes/Sec     84.17 MB  7.64 MB   92.27 MB

92k requests in 11s, 937.22 MB read

While a 1.6x improvement is significant, it arguably depends on the situation whether the effort, changes, and code disruption necessary to create this improvement are justified, especially when compared to the 200x improvement achieved on the original implementation with a single bug fix.

To achieve this improvement, the same iterative technique of profile, generate flame graph, analyze, debug, and optimize was used to arrive at the final optimized server, the code for which can be found here.

The final changes that took the server to around 8000 requests per second are slightly more involved, a little more disruptive to the code base, and leave the etagger middleware a little less flexible, because they put the burden on the route to provide the Etag value. But they achieve an extra 3000 requests per second on the profiling machine.
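As a rough illustration of the “route provides the Etag” idea, a sketch like the following computes the hash once at startup instead of hashing the serialized body on every response. This is an assumed shape, not the actual optimized code, which is in the linked repository:

'use strict'

// Sketch: for content that is static per process, the ETag can be
// precomputed once rather than derived per request.
const restify = require('restify')
const crypto = require('crypto')

const content = crypto.rng(5000).toString('hex')
const etag = crypto.createHash('sha512').update(content).digest('hex')
const server = restify.createServer()

server.get('/seed/v1', (req, res, next) => {
  res.set('Etag', etag) // precomputed once, not per request
  res.set('Cache-Control', 'public, max-age=120')
  res.send({ data: content, url: req.url })
  next()
})

server.listen(3000)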

Let’s take a look at a flame graph for these final improvements:


Flame graph shows that internal code related to the net module is now the bottleneck


Healthy flame graph after all performance improvements

The hottest part of the flame graph is part of Node core, in the net module. This is ideal.

Preventing Performance Problems

To round off, here are some suggestions on ways to prevent performance issues before they are deployed.

Using performance tools as informal checkpoints during development can filter out performance bugs before they make it into production. Making AutoCannon and Clinic (or equivalents) part of everyday development tooling is recommended.
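One low-friction way to do that is to wire the tools into npm scripts, so the checks are a single command away. A sketch, where the script names and the route are assumptions carried over from the example app:

{
  "scripts": {
    "bench": "autocannon -c100 localhost:3000/seed/v1",
    "doctor": "clinic doctor --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js",
    "flame": "clinic flame --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js"
  }
}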

When buying into a framework, find out what its policy on performance is. If the framework does not prioritize performance, then it’s important to check whether that aligns with infrastructural practices and business goals. For instance, Restify has clearly (since the release of version 7) invested in enhancing the library’s performance. However, if low cost and high speed are an absolute priority, consider Fastify, which has been measured as 17% faster by a Restify contributor.

Watch out for other widely impacting library choices — especially consider logging. As developers fix issues, they may decide to add additional log output to help debug related problems in the future. If an unperformant logger is used, this can strangle performance over time, after the fashion of the boiling frog fable. The pino logger is the fastest newline-delimited JSON logger available for Node.js.

Finally, always remember that the Event Loop is a shared resource. A Node.js server is ultimately constrained by the slowest logic in the hottest path.



Continue at source:  

Keeping Node.js Fast: Tools, Techniques, And Tips For Making High-Performance Node.js Servers

Automating Your Feature Testing With Selenium WebDriver




Automating Your Feature Testing With Selenium WebDriver

Nils Schütte



This article is for web developers who wish to spend less time testing the front end of their web applications but still want to be confident that every feature works fine. It will save you time by automating repetitive online tasks with Selenium WebDriver. You will find a step-by-step example for automating and testing the login function of WordPress, but you can also adapt the example for any other login form.

What Is Selenium And How Can It Help You?

Selenium is a framework for the automated testing of web applications. Using Selenium, you can basically automate every task in your browser as if a real person were to execute the task. The interface used to send commands to the different browsers is called Selenium WebDriver. Implementations of this interface are available for every major browser, including Mozilla Firefox, Google Chrome and Internet Explorer.

Automating Your Feature Testing With Selenium WebDriver

Which type of web developer are you? Are you the disciplined type who tests all key features of your web application after each deployment? If so, you are probably annoyed by how much time this repetitive testing consumes. Or are you the type who just doesn’t bother with testing key features and always thinks, “I should test more, but I’d rather develop new stuff.” If so, you probably only find bugs by chance or when your client or boss complains about them.

I have been working for a well-known online retailer in Germany for quite a while, and I always belonged to the second category: It was so exciting to think of new features for the online shop, and I didn’t like at all going over all of the previous features again after each new software deployment. So, the strategy was more or less to hope that all key features would work.

One day, we had a serious drop in our conversion rate and started digging in our web analytics tools to find the source of this drop. It took quite a while before we found out that our checkout had not been working properly since the previous software deployment.

This was the day when I started to do some research about automating our testing process of web applications, and I stumbled upon Selenium and its WebDriver. Selenium is basically a framework that allows you to automate web browsers. WebDriver is the name of the key interface that allows you to send commands to all major browsers (mobile and desktop) and work with them as a real user would.

Preparing The First Test With Selenium WebDriver

First, I was a little skeptical of whether Selenium would suit my needs because the framework is most commonly used in Java, and I am certainly not a Java expert. Later, I learned that being a Java expert is not necessary to take advantage of the power of the Selenium framework.

As a simple first test, I tested the login of one of my WordPress projects. Why WordPress? Just because using the WordPress login form is an example that everybody can follow more easily than if I were to refer to some custom web application.

What do you need to start using Selenium WebDriver? Because I decided to use the most common implementation of Selenium in Java, I needed to set up my little Java environment.

If you want to follow my example, you can use the Java environment of your choice. If you haven’t set one up yet, I suggest installing Eclipse and making sure you are able to run a simple “Hello world” script in Java.

Because I wanted to test the login in Chrome, I made sure that the Chrome browser was already installed on my machine. That’s all I did in preparation.

Downloading The ChromeDriver

All major browsers provide their own implementation of the WebDriver interface. Because I wanted to test the WordPress login in Chrome, I needed to get the WebDriver implementation of Chrome: ChromeDriver.

I extracted the ZIP archive and stored the executable file chromedriver.exe in a location that I could remember for later.

Setting Up Our Selenium Project In Eclipse

The steps I took in Eclipse are probably pretty basic to someone who works a lot with Java and Eclipse. But for those like me, who are not so familiar with this, I will go over the individual steps:

  1. Open Eclipse.
  2. Click the “New” icon.
    Creating a new project in Eclipse
    Creating a new project in Eclipse
  3. Choose the wizard to create a new “Java Project,” and click “Next.”
    Choosing the java-project wizard
    Choose the java-project wizard.
  4. Give your project a name, and click “Finish.”
    Eclipse project wizard
    The eclipse project wizard
  5. Now you should see your new Java project on the left side of the screen.
    Java project successfully created
    We successfully created a project to run the Selenium WebDriver.

Adding The Selenium Library To Our Project

Now we have our Java project, but Selenium is still missing. So, next, we need to bring the Selenium framework into our Java project. Here are the steps I took:

  1. Download the latest version of the Java Selenium library.

    Downloading the Selenium library
    Download the Selenium library.
  2. Extract the archive, and store the folder in a place you can remember easily.
  3. Go back to Eclipse, and go to “Project” → “Properties.”
    Eclipse Properties
    Go to properties to integrate the Selenium WebDriver in your project.
  4. In the dialog, go to “Java Build Path” and then to register “Libraries.”
  5. Click on “Add External JARs.”
    Adding the Selenium lib to your Java build path.
    Add the Selenium lib to your Java build path.
  6. Navigate to the just downloaded folder with the Selenium library. Highlight all .jar files and click “Open.”
    Selecting the correct files of the Selenium lib.
    Select all files of the lib to add to your project.
  7. Repeat this for all .jar files in the subfolder libs as well.
  8. Eventually, you should see all .jar files in the libraries of your project:
    Selenium WebDriver framework successfully integrated into your project
    The Selenium WebDriver framework has now been successfully integrated into your project!

That’s it! Everything we’ve done until now is a one-time task. You could use this project now for all of your different tests, and you wouldn’t need to do the whole setup process for every test case again. Kind of neat, isn’t it?

Creating Our Testing Class And Letting It Open the Chrome Browser

Now we have our Selenium project, but what next? To see whether it works at all, I wanted to try something really simple, like just opening my Chrome browser.

To do this, I needed to create a new Java class from which I could execute my first test case. Into this executable class, I copied a few Java code lines, and believe it or not, it worked! Magically, the Chrome browser opened and, after a few seconds, closed all by itself.

Try it yourself:

  1. Click on the “New” button again (while you are in your new project’s folder).
    New class in eclipse
    Create a new class to run the Selenium WebDriver.
  2. Choose the “Class” wizard, and click “Next.”
    New class wizard in eclipse
    Choose the Java class wizard to create a new class.
  3. Name your class (for example, “RunTest”), and click “Finish.”
    Eclipse Java Class wizard
    The eclipse Java Class wizard.
  4. Replace all code in your new class with the following code. The only thing you need to change is the path to chromedriver.exe on your computer:
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    /**
     * @author Nils Schuette via frontendtest.org
     */
    public class RunTest {
        static WebDriver webDriver;
        /**
         * @param args
         * @throws InterruptedException
         */
        public static void main(final String[] args) throws InterruptedException {
            // Telling the system where to find the chrome driver
            System.setProperty(
                    "webdriver.chrome.driver",
                    "C:/PATH/TO/chromedriver.exe");

            // Open the Chrome browser
            webDriver = new ChromeDriver();

            // Waiting a bit before closing
            Thread.sleep(7000);

            // Closing the browser and WebDriver
            webDriver.close();
            webDriver.quit();
        }
    }

  5. Save your file, and click on the play button to run your code.
    Run Eclipse project
    Running your first Selenium WebDriver project.
  6. If you have done everything correctly, the code should open a new instance of the Chrome browser and close it shortly thereafter.
    Chrome Browser blank window
    The Chrome Browser opens itself magically. (Large preview)

Testing The WordPress Admin Login

Now I was optimistic that I could automate my first little feature test. I wanted the browser to navigate to one of my WordPress projects, login to the admin area and verify that the login was successful. So, what commands did I need to look up?

  1. Navigate to the login form,
  2. Locate the input fields,
  3. Type the username and password into the input fields,
  4. Hit the login button,
  5. Compare the current page’s headline to see if the login was successful.

Again, after I had done all the necessary updates to my code and clicked on the run button in Eclipse, my browser started to magically work itself through the WordPress login. I successfully ran my first automated website test!

If you want to try this yourself, replace all of the code of your Java class with the following. I will go through the code in detail afterwards. Before executing the code, you must replace four values with your own:

  1. The location of your chromedriver.exe file (as above),

  2. The URL of the WordPress admin account that you want to test,

  3. The WordPress username,

  4. The WordPress password.

Then, save and let it run again. It will open Chrome, navigate to the login of your WordPress website, login and check whether the h1 headline of the current page is “Dashboard.”

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * @author Nils Schuette via frontendtest.org
 */
public class RunTest {
    static WebDriver webDriver;
    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(final String[] args) throws InterruptedException {
        // Telling the system where to find the chrome driver
        System.setProperty(
                "webdriver.chrome.driver",
                "C:/PATH/TO/chromedriver.exe");

        // Open the Chrome browser
        webDriver = new ChromeDriver();

        // Maximize the browser window
        webDriver.manage().window().maximize();

        if (testWordpresslogin()) {
            System.out.println("Test WordPress Login: Passed");
        } else {
            System.out.println("Test WordPress Login: Failed");
        }

        // Close the browser and WebDriver
        webDriver.close();
        webDriver.quit();
    }

    private static boolean testWordpresslogin() {
        try {
            // Navigate to the WordPress admin login page
            webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

            // Type in the username
            webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

            // Type in the password
            webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

            // Click the Submit button
            webDriver.findElement(By.id("wp-submit")).click();

            // Wait a little bit (7000 milliseconds)
            Thread.sleep(7000);

            // Check whether the h1 equals "Dashboard"
            if (webDriver.findElement(By.tagName("h1")).getText()
                    .equals("Dashboard")) {
                return true;
            } else {
                return false;
            }

        // If anything goes wrong, return false.
        } catch (final Exception e) {
            System.out.println(e.getClass().toString());
            return false;
        }
    }
}

If you have done everything correctly, your output in the Eclipse console should look something like this:

Eclipse console test result.
The Eclipse console states that our first test has passed. (Large preview)

Understanding The Code

Because you are probably a web developer and have at least a basic understanding of other programming languages, I am sure you already grasp the basic idea of the code: We have created a separate method, testWordpresslogin, for the specific test case that is called from our main method.

Depending on whether the method returns true or false, you will get an output in your console telling you whether this specific test passed or failed.

This is not necessary, but this way you can easily add many more test cases to this class and still have readable code.

Now, step by step, here is what happens in our little program:

  1. First, we tell our program where it can find the specific WebDriver for Chrome.
    System.setProperty("webdriver.chrome.driver","C:/PATH/TO/chromedriver.exe");
  2. We open the Chrome browser and maximize the browser window.
    webDriver = new ChromeDriver();
    webDriver.manage().window().maximize();
  3. This is where we jump into our submethod and check whether it returns true or false.
    if (testWordpresslogin()) …
  4. The following part in our submethod might not be intuitive to understand:
    The try…catch blocks: if everything goes as expected, only the code in the try block will be executed; but if anything goes wrong while executing it, execution continues in the catch block. Whenever you try to locate an element with findElement and the browser is not able to locate it, an exception is thrown and the code in the catch block runs. In my example, the test will be marked as “failed” whenever something goes wrong and the catch block is executed.
  5. In the submethod, we start by navigating to our WordPress admin area and locating the fields for the username and the password by looking for their IDs. Also, we type the given values in these fields.
    webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");
    webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");
    webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

    Wordpress login form
    Selenium fills out our login form
  6. After filling in the login form, we locate the submit button by its ID and click it.
    webDriver.findElement(By.id("wp-submit")).click();
  7. In order to follow the test visually, I include a 7-second pause here (7000 milliseconds = 7 seconds).
    Thread.sleep(7000);
  8. If the login is successful, the h1 headline of the current page should now be “Dashboard,” referring to the WordPress admin area. Because the h1 headline should exist only once on every page, I have used the tag name here to locate the element. In most other cases, the tag name is not a good locator because an HTML tag name is rarely unique on a web page. After locating the h1, we extract the text of the element with getText() and check whether it is equal to the string “Dashboard.” If the login is not successful, we would not find “Dashboard” as the current h1. Therefore, I’ve decided to use the h1 to check whether the login is successful.
    if (webDriver.findElement(By.tagName("h1")).getText().equals("Dashboard")) {
        return true;
    } else {
        return false;
    }

    Wordpress Dashboard
    Letting the WebDriver check whether we have arrived on the Dashboard: Test passed! (Large preview)
  9. If anything has gone wrong in the previous part of the submethod, the program would have jumped directly to the following part. The catch block will print the type of exception that happened to the console and afterwards return false to the main method.
    catch (final Exception e) {
        System.out.println(e.getClass().toString());
        return false;
    }

Adapting The Test Case

This is where it gets interesting if you want to adapt and add test cases of your own. You can see that we always call methods of the webDriver object to do something with the Chrome browser.

First, we maximize the window:

webDriver.manage().window().maximize();

Then, in a separate method, we navigate to our WordPress admin area:

webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

There are other methods of the webDriver object we can use. Besides the two above, you will probably use this one a lot:

webDriver.findElement(By. …)

The findElement method helps us find different elements in the DOM. There are different options to find elements:

  • By.id
  • By.cssSelector
  • By.className
  • By.linkText
  • By.name
  • By.xpath

If possible, I recommend using By.id because the ID of an element should always be unique (unlike, for example, the className), and it is usually not affected if the structure of your DOM changes (unlike, say, an XPath).

Note: You can read more about the different options for locating elements with WebDriver over here.

As soon as you get ahold of an element using the findElement method, you can call the different available methods of the element. The most common ones are sendKeys, click and getText.

We’re using sendKeys to fill in the login form:

webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

We have used click to submit the login form by clicking on the submit button:

webDriver.findElement(By.id("wp-submit")).click();

And getText has been used to check what text is in the h1 after the submit button is clicked:

webDriver.findElement(By.tagName("h1")).getText()

Note: Be sure to check out all the available methods that you can use with an element.

Conclusion

Ever since I discovered the power of Selenium WebDriver, my life as a web developer has changed. I simply love it. The deeper I dive into the framework, the more possibilities I discover — running one test simultaneously in Chrome, Internet Explorer and Firefox or even on my smartphone, or taking screenshots automatically of different pages and comparing them. Today, I use Selenium WebDriver not only for testing purposes, but also to automate repetitive tasks on the web. Whenever I see an opportunity to automate my work on the web, I simply copy my initial WebDriver project and adapt it to the next task.

If you think that Selenium WebDriver is for you, I recommend looking at Selenium’s documentation to find out about all of the possibilities of Selenium (such as running tasks simultaneously on several (mobile) devices with Selenium Grid).

I look forward to hearing whether you find WebDriver as useful as I do!



Original link: 

Automating Your Feature Testing With Selenium WebDriver

Why Is Google Analytics Inaccurate?

You may have noticed that some of your Google Analytics data isn’t entirely accurate. Whether you saw a sudden, unwarranted change in user behavior, picked up on major differences after a redesign, or found some unexplainable information within a report, there are many things that can indicate issues with your data. And that’s completely normal. Google Analytics is one of the most popular (if not the most popular) platforms for monitoring site performance. It can provide tons of valuable insight and is considered by many SEOs and site owners to be an indispensable tool. But it isn’t perfect. In fact,…

The post Why Is Google Analytics Inaccurate? appeared first on The Daily Egg.

Visit link – 

Why Is Google Analytics Inaccurate?

How To Prevent Common WordPress Theme Mistakes

If you’ve been thinking of creating free or premium WordPress themes, well, I hope I can help you avoid some of the mistakes I’ve made over the years. Even though I always strive for good clean code, there are pursuits that still somehow lead me into making mistakes. I hope that I can help you avoid them with the help of this article.
1. Don’t Gradually Reinvent The Wheel
Be careful when making things look nice — especially if you create a function that does almost exactly the same thing as another function, just to wrap things nicely.

More: 

How To Prevent Common WordPress Theme Mistakes

Learning Framer By Creating A Mobile App Prototype

The time of static user interfaces is long gone. Designing interactive prototypes is the best approach to expressing your ideas and explaining them to clients and stakeholders. Or, as Jerry Cao of UXPin puts it: “Nothing brings you closer to the functionality of the final product than prototyping. It is the prototype that brings to life the experience behind user experience.”
Prototyping is an important part of the modern UX design process.

Visit site: 

Learning Framer By Creating A Mobile App Prototype

Respect Always Comes First

The past years have been remarkable for web technologies. Our code has become modular, clean and well-defined. Our tooling for build processes and audits and testing and maintenance has never been so powerful. Our design process is systematic and efficient. Our interfaces are smooth and responsive, with a sprinkle of beautiful transitions and animations here and there. And after so many years, accessibility and performance have finally become established, well-recognized pillars of user experience.

Originally from: 

Respect Always Comes First