Tag Archives: node.js

Keeping Node.js Fast: Tools, Techniques, And Tips For Making High-Performance Node.js Servers

David Mark Clements



If you’ve been building anything with Node.js for long enough, then you’ve no doubt experienced the pain of unexpected speed issues. JavaScript is an evented, asynchronous language. That can make reasoning about performance tricky, as will become apparent. The surging popularity of Node.js has exposed the need for tooling, techniques and thinking suited to the constraints of server-side JavaScript.

When it comes to performance, what works in the browser doesn’t necessarily suit Node.js. So, how do we make sure a Node.js implementation is fast and fit for purpose? Let’s walk through a hands-on example.

Tools

Node is a very versatile platform, but one of the predominant applications is creating networked processes. We’re going to focus on profiling the most common of these: HTTP web servers.

We’ll need a tool that can blast a server with lots of requests while measuring the performance. For example, we can use AutoCannon:

npm install -g autocannon

Other good HTTP benchmarking tools include Apache Bench (ab) and wrk2, but AutoCannon is written in Node, provides similar (or sometimes greater) load pressure, and is very easy to install on Windows, Linux, and Mac OS X.

After we’ve established a baseline performance measurement, if we decide our process could be faster we’ll need some way to diagnose problems with the process. A great tool for diagnosing various performance issues is Node Clinic, which can also be installed with npm:

npm install -g clinic

This actually installs a suite of tools. We’ll be using Clinic Doctor and Clinic Flame (a wrapper around 0x) as we go.

Note: For this hands-on example we’ll need Node 8.11.2 or higher.

The Code

Our example case is a simple REST server with a single resource: a large JSON payload exposed as a GET route at /seed/v1. The server is an app folder which consists of a package.json file (with a dependency on restify 7.1.0), an index.js file, and a util.js file.

The index.js file for our server looks like so:

'use strict'

const restify = require('restify')
const { etagger, timestamp, fetchContent } = require('./util')()
const server = restify.createServer()

server.use(etagger().bind(server))

server.get('/seed/v1', function (req, res, next) {
  fetchContent(req.url, (err, content) => {
    if (err) return next(err)
    res.send({ data: content, url: req.url, ts: timestamp() })
    next()
  })
})

server.listen(3000)

This server is representative of the common case of serving client-cached dynamic content. This is achieved with the etagger middleware, which calculates an ETag header for the latest state of the content.

The util.js file provides implementation pieces that would commonly be used in such a scenario: a function to fetch the relevant content from a backend, the etag middleware, and a timestamp function that supplies timestamps on a minute-by-minute basis:

'use strict'

require('events').defaultMaxListeners = Infinity
const crypto = require('crypto')

module.exports = () => {
  const content = crypto.rng(5000).toString('hex')
  const ONE_MINUTE = 60000
  var last = Date.now()

  function timestamp () {
    var now = Date.now()
    if (now - last >= ONE_MINUTE) last = now
    return last
  }

  function etagger () {
    var cache = {}
    var afterEventAttached = false
    function attachAfterEvent (server) {
      if (attachAfterEvent === true) return
      afterEventAttached = true
      server.on('after', (req, res) => {
        if (res.statusCode !== 200) return
        if (!res._body) return
        const key = crypto.createHash('sha512')
          .update(req.url)
          .digest()
          .toString('hex')
        const etag = crypto.createHash('sha512')
          .update(JSON.stringify(res._body))
          .digest()
          .toString('hex')
        if (cache[key] !== etag) cache[key] = etag
      })
    }
    return function (req, res, next) {
      attachAfterEvent(this)
      const key = crypto.createHash('sha512')
        .update(req.url)
        .digest()
        .toString('hex')
      if (key in cache) res.set('Etag', cache[key])
      res.set('Cache-Control', 'public, max-age=120')
      next()
    }
  }

  function fetchContent (url, cb) {
    setImmediate(() => {
      if (url !== '/seed/v1') cb(Object.assign(Error('Not Found'), { statusCode: 404 }))
      else cb(null, content)
    })
  }

  return { timestamp, etagger, fetchContent }
}

By no means take this code as an example of best practices! There are multiple code smells in this file, but we’ll locate them as we measure and profile the application.

To get the full source for our starting point, the slow server can be found over here.

Profiling

In order to profile, we need two terminals, one for starting the application, and the other for load testing it.

In one terminal, within the app folder, we can run:

node index.js

In another terminal we can profile it like so:

autocannon -c100 localhost:3000/seed/v1

This will open 100 concurrent connections and bombard the server with requests for ten seconds.

The results should be something similar to the following (Running 10s test @ http://localhost:3000/seed/v1 — 100 connections):

Stat          Avg        Stdev     Max
Latency (ms)  3086.81    1725.2    5554
Req/Sec       23.1       19.18     65
Bytes/Sec     237.98 kB  197.7 kB  688.13 kB

231 requests in 10s, 2.4 MB read

Results will vary depending on the machine. However, considering that a “Hello World” Node.js server is easily capable of thirty thousand requests per second on the machine that produced these results, 23 requests per second with an average latency exceeding 3 seconds is dismal.

Diagnosing

Discovering The Problem Area

We can diagnose the application with a single command, thanks to Clinic Doctor’s --on-port command. Within the app folder we run:

clinic doctor --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js

This will create an HTML file that will automatically open in our browser when profiling is complete.

The results should look something like the following:


[Figure: Clinic Doctor results, showing a detected Event Loop issue]

The Doctor is telling us that we probably have an Event Loop issue.

Along with the message near the top of the UI, we can also see that the Event Loop chart is red, and shows a constantly increasing delay. Before we dig deeper into what this means, let’s first understand the effect the diagnosed issue is having on the other metrics.

We can see the CPU is consistently at or above 100% as the process works hard to process queued requests. Node’s JavaScript engine (V8) actually uses two CPU cores here: one for the Event Loop and the other for Garbage Collection. When we see the CPU spiking up to 120% in some cases, the process is collecting objects related to handled requests.

We see this correlated in the Memory graph. The solid line in the Memory chart is the Heap Used metric. Any time there’s a spike in CPU we see a fall in the Heap Used line, showing that memory is being deallocated.

Active Handles are unaffected by the Event Loop delay. An active handle is an object that represents either I/O (such as a socket or file handle) or a timer (such as a setInterval). We instructed AutoCannon to open 100 connections (-c100), and the Active Handles count stays consistent at 103. The other three are the handles for STDOUT, STDERR, and the server itself.

If we click the Recommendations panel at the bottom of the screen, we should see something like the following:


[Figure: The Clinic Doctor recommendations panel, opened to show issue-specific recommendations]

Short-Term Mitigation

Root cause analysis of serious performance issues can take time. In the case of a live deployed project, it’s worth adding overload protection to servers or services. The idea of overload protection is to monitor event loop delay (among other things), and respond with “503 Service Unavailable” if a threshold is passed. This allows a load balancer to fail over to other instances, or in the worst case means users will have to refresh. The overload-protection module can provide this with minimum overhead for Express, Koa, and Restify. The Hapi framework has a load configuration setting which provides the same protection.
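A minimal sketch of what this can look like with the overload-protection module follows. Treat the Restify wiring and the option value as assumptions to verify against the module’s README:

'use strict'

// Sketch: shed load with 503s once the event loop falls too far behind.
// The 'restify' integration style and option shown are assumptions to
// check against the overload-protection documentation (registering it
// via server.pre may be preferable).
const restify = require('restify')
const protect = require('overload-protection')('restify', {
  maxEventLoopDelay: 42 // milliseconds of event loop delay before shedding
})

const server = restify.createServer()
server.use(protect) // replies 503 Service Unavailable while under pressure
server.get('/seed/v1', (req, res, next) => { /* ... */ next() })
server.listen(3000)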

Understanding The Problem Area

As Clinic Doctor’s short explanation indicates, if the Event Loop is delayed to the level that we’re observing, it’s very likely that one or more functions are “blocking” the Event Loop.

It’s especially important with Node.js to recognize this primary JavaScript characteristic: asynchronous events cannot occur until currently executing code has completed.

This is why a setTimeout cannot be precise.

For instance, try running the following in a browser’s DevTools or the Node REPL:

console.time('timeout')
setTimeout(console.timeEnd, 100, 'timeout')
let n = 1e7
while (n--) Math.random()

The resulting time measurement will never be 100ms. It will likely be in the range of 150ms to 250ms. The setTimeout scheduled an asynchronous operation (console.timeEnd), but the currently executing code has not yet completed; there are two more lines. The currently executing code is known as the current “tick.” For the tick to complete, Math.random has to be called ten million times. If this takes 100ms, then the total time before the timeout resolves will be 200ms (plus however long it takes the setTimeout function to actually queue the timeout beforehand, usually a couple of milliseconds).

In a server-side context, if an operation in the current tick is taking a long time to complete, requests cannot be handled and data fetching cannot occur, because asynchronous code will not be executed until the current tick has completed. This means that computationally expensive code will slow down all interactions with the server. So it’s recommended to split resource-intensive work into separate processes and call them from the main server; this avoids cases where a rarely used but expensive route slows down the performance of other frequently used but inexpensive routes.
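A minimal sketch of that pattern with Node’s built-in child_process module follows. The worker file name is illustrative, and in practice a pool of pre-forked workers would avoid per-request process start-up costs:

'use strict'

// Main server: hand expensive work to another process so the Event Loop
// in this process stays free to serve other requests.
const { fork } = require('child_process')
const http = require('http')

http.createServer((req, res) => {
  const worker = fork('./expensive-task.js') // hypothetical worker script
  worker.once('message', (result) => {
    res.end(JSON.stringify(result))
    worker.kill()
  })
  worker.send({ input: req.url })
}).listen(3000)

// expensive-task.js would look something like:
// process.on('message', (msg) => {
//   let n = 1e9
//   while (n--) {} // simulated heavy computation; blocks only this process
//   process.send({ done: true, input: msg.input })
// })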

The example server has some code that is blocking the Event Loop, so the next step is to locate that code.

Analyzing

One way to quickly identify poorly performing code is to create and analyze a flame graph. A flame graph represents function calls as blocks sitting on top of each other — not over time but in aggregate. The reason it’s called a “flame graph” is that it typically uses an orange-to-red color scheme, where the redder a block is, the “hotter” the function is, meaning the more likely it is to be blocking the event loop. Capturing data for a flame graph is done by sampling the CPU: a snapshot is taken of the function currently being executed, together with its stack. The heat is determined by the percentage of time during profiling that a given function is at the top of the stack (i.e. the function currently being executed) across samples. If it’s not the last function to ever be called within that stack, then it’s likely to be blocking the event loop.

Let’s use clinic flame to generate a flame graph of the example application:

clinic flame --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js

The result should open in our browser with something like the following:


[Figure: Clinic’s flame graph visualization, showing server.on as the bottleneck]

The width of a block represents how much time the function spent on CPU overall. Three main stacks can be observed taking up the most time, all of them highlighting server.on as the hottest function. In truth, all three stacks are the same. They diverge because during profiling optimized and unoptimized functions are treated as separate call frames. Functions prefixed with a * are optimized by the JavaScript engine, and those prefixed with a ~ are unoptimized. If the optimized state isn’t important to us, we can simplify the graph further by pressing the Merge button. This should lead to a view similar to the following:


[Figure: The merged flame graph]

From the outset, we can infer that the offending code is in the util.js file of the application code.

The slow function is also an event handler: the functions leading up to it are part of the core events module, and server.on is a fallback name for an anonymous function provided as an event handling function. We can also see that this code isn’t in the same tick as the code that actually handles the request. If it were, functions from the core http, net, and stream modules would be in the stack.

Such core functions can be found by expanding other, much smaller, parts of the flame graph. For instance, try using the search input on the top right of the UI to search for send (the name of both restify and http internal methods). It should be on the right of the graph (functions are alphabetically sorted):


[Figure: Searching the flame graph for HTTP processing functions; two small blocks are highlighted]

Notice how comparatively small all the actual HTTP handling blocks are.

We can click one of the blocks highlighted in cyan, which will expand to show functions like writeHead and write in the http_outgoing.js file (part of the Node core http library):


[Figure: The flame graph zoomed into HTTP-relevant stacks]

We can click all stacks to return to the main view.

The key point here is that even though the server.on function isn’t in the same tick as the actual request handling code, it’s still affecting the overall server performance by delaying the execution of otherwise performant code.

Debugging

We know from the flame graph that the problematic function is the event handler passed to server.on in the util.js file.

Let’s take a look:

server.on('after', (req, res) => {
  if (res.statusCode !== 200) return
  if (!res._body) return
  const key = crypto.createHash('sha512')
    .update(req.url)
    .digest()
    .toString('hex')
  const etag = crypto.createHash('sha512')
    .update(JSON.stringify(res._body))
    .digest()
    .toString('hex')
  if (cache[key] !== etag) cache[key] = etag
})

It’s well known that cryptography tends to be expensive, as does serialization (JSON.stringify), but why don’t they appear in the flame graph? These operations are in the captured samples, but they’re hidden behind the cpp filter. If we press the cpp button, we should see something like the following:


[Figure: Revealing serialization and cryptography C++ frames in the flame graph]

The internal V8 instructions relating to both serialization and cryptography are now shown as the hottest stacks and as taking up most of the time. The JSON.stringify method directly calls C++ code; this is why we don’t see a JavaScript function. In the cryptography case, functions like createHash and update are in the data, but they are either inlined (which means they disappear in the merged view) or too small to render.

Once we start to reason about the code in the etagger function, it can quickly become apparent that it’s poorly designed. Why are we taking the server instance from the function context? There’s a lot of hashing going on; is all of that necessary? There’s also no If-None-Match header support in the implementation, which would mitigate some of the load in real-world scenarios, because clients would only make conditional requests to determine freshness.
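As an illustration only, If-None-Match support could be sketched into the middleware like this; the use of next(false) to stop Restify’s handler chain follows Restify’s documented convention, but the rest is hypothetical:

// Hypothetical sketch: honor If-None-Match so clients with a fresh copy
// get a 304 with no body instead of the full (re-serialized) response.
return function (req, res, next) {
  attachAfterEvent(this)
  const key = crypto.createHash('sha512')
    .update(req.url)
    .digest()
    .toString('hex')
  res.set('Cache-Control', 'public, max-age=120')
  if (key in cache) {
    res.set('Etag', cache[key])
    if (req.headers['if-none-match'] === cache[key]) {
      res.send(304) // the client's cached copy is still valid
      return next(false) // stop the handler chain here
    }
  }
  next()
}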

Let’s ignore all of these points for the moment and validate the finding that the actual work being performed in server.on is indeed the bottleneck. This can be achieved by setting the server.on code to an empty function and generating a new flame graph.

Alter the etagger function to the following:

function etagger () {
  var cache = {}
  var afterEventAttached = false
  function attachAfterEvent (server) {
    if (attachAfterEvent === true) return
    afterEventAttached = true
    server.on('after', (req, res) => {})
  }
  return function (req, res, next) {
    attachAfterEvent(this)
    const key = crypto.createHash('sha512')
      .update(req.url)
      .digest()
      .toString('hex')
    if (key in cache) res.set('Etag', cache[key])
    res.set('Cache-Control', 'public, max-age=120')
    next()
  }
}

The event listener function passed to server.on is now a no-op.

Let’s run clinic flame again:

clinic flame --on-port='autocannon -c100 localhost:$PORT/seed/v1' -- node index.js

This should produce a flame graph similar to the following:


[Figure: Flame graph of the server when server.on is an empty function; Node’s event system stacks are still the bottleneck]

This looks better, and we should have noticed an increase in requests per second. But why is the event emitting code so hot? We would expect the HTTP processing code to take up the majority of CPU time at this point; after all, nothing is executing at all in the server.on event handler.

This type of bottleneck is caused by a function being executed more than it should be.

The following suspicious code at the top of util.js may be a clue:

require('events').defaultMaxListeners = Infinity

Let’s remove this line and start our process with the --trace-warnings flag:

node --trace-warnings index.js

If we profile with AutoCannon in another terminal, like so:

autocannon -c100 localhost:3000/seed/v1

Our process will output something similar to:

(node:96371) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 after listeners added. Use emitter.setMaxListeners() to increase limit
  at _addListener (events.js:280:19)
  at Server.addListener (events.js:297:10)
  at attachAfterEvent 
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/util.js:22:14)
  at Server.
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/util.js:25:7)
  at call
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/chain.js:164:9)
  at next
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/chain.js:120:9)
  at Chain.run
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/chain.js:123:5)
  at Server._runUse
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/server.js:976:19)
  at Server._runRoute
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/server.js:918:10)
  at Server._afterPre
    (/Users/davidclements/z/nearForm/keeping-node-fast/slow/node_modules/restify/lib/server.js:888:10)

Node is telling us that lots of events are being attached to the server object. This is strange, because there’s a boolean that checks whether the event has been attached and then returns early, essentially making attachAfterEvent a no-op after the first event is attached.

Let’s take a look at the attachAfterEvent function:

var afterEventAttached = false
function attachAfterEvent (server) {
  if (attachAfterEvent === true) return
  afterEventAttached = true
  server.on('after', (req, res) => {})
}

The conditional check is wrong! It checks whether attachAfterEvent is true instead of afterEventAttached. This means a new event is being attached to the server instance on every request, and then all prior attached events are being fired after each request. Whoops!

Optimizing

Now that we’ve discovered the problem areas, let’s see if we can make the server faster.

Low-Hanging Fruit

Let’s put the server.on listener code back (instead of an empty function) and use the correct boolean name in the conditional check. Our etagger function looks as follows:

function etagger () {
  var cache = {}
  var afterEventAttached = false
  function attachAfterEvent (server) {
    if (afterEventAttached === true) return
    afterEventAttached = true
    server.on('after', (req, res) => {
      if (res.statusCode !== 200) return
      if (!res._body) return
      const key = crypto.createHash('sha512')
        .update(req.url)
        .digest()
        .toString('hex')
      const etag = crypto.createHash('sha512')
        .update(JSON.stringify(res._body))
        .digest()
        .toString('hex')
      if (cache[key] !== etag) cache[key] = etag
    })
  }
  return function (req, res, next) {
    attachAfterEvent(this)
    const key = crypto.createHash('sha512')
      .update(req.url)
      .digest()
      .toString('hex')
    if (key in cache) res.set('Etag', cache[key])
    res.set('Cache-Control', 'public, max-age=120')
    next()
  }
}

Now we check our fix by profiling again. Start the server in one terminal:

node index.js

Then profile with AutoCannon:

autocannon -c100 localhost:3000/seed/v1

We should see results somewhere in the range of a 200 times improvement (Running 10s test @ http://localhost:3000/seed/v1 — 100 connections):

Stat          Avg      Stdev    Max
Latency (ms)  19.47    4.29     103
Req/Sec       5011.11  506.2    5487
Bytes/Sec     51.8 MB  5.45 MB  58.72 MB

50k requests in 10s, 519.64 MB read

It’s important to balance potential server cost reductions with development costs. We need to define, in our own situational contexts, how far we need to go in optimizing a project. Otherwise, it can be all too easy to put 80% of the effort into 20% of the speed enhancements. Do the constraints of the project justify this?

In some scenarios, it could be appropriate to achieve a 200 times improvement with low-hanging fruit and call it a day. In others, we may want to make our implementation as fast as it can possibly be. It really depends on project priorities.

One way to control resource spend is to set a goal. For instance, 10 times improvement, or 4000 requests per second. Basing this on business needs makes the most sense. For instance, if server costs are 100% over budget, we can set a goal of 2x improvement.

Taking It Further

If we produce a new flame graph of our server, we should see something similar to the following:


[Figure: Flame graph after the performance bug fix; server.on is still the bottleneck, but a smaller one]

The event listener is still the bottleneck; it’s still taking up one-third of CPU time during profiling (its width is about one third of the whole graph).

What additional gains can be made, and are the changes (along with their associated disruption) worth making?

With an optimized implementation, which is nonetheless slightly more constrained, the following performance characteristics can be achieved (Running 10s test @ http://localhost:3000/seed/v1 — 10 connections):

Stat          Avg       Stdev    Max
Latency (ms)  0.64      0.86     17
Req/Sec       8330.91   757.63   8991
Bytes/Sec     84.17 MB  7.64 MB  92.27 MB

92k requests in 11s, 937.22 MB read

While a 1.6x improvement is significant, whether the effort, changes, and code disruption necessary to create it are justified arguably depends on the situation, especially when compared to the 200x improvement gained on the original implementation with a single bug fix.

To achieve this improvement, the same iterative technique of profiling, generating a flame graph, analyzing, debugging, and optimizing was used to arrive at the final optimized server, the code for which can be found here.

The final changes to reach 8000 req/s were to strip the per-request hashing and serialization work out of the middleware and have each route supply its ETag value directly.

These changes are slightly more involved, a little more disruptive to the code base, and leave the etagger middleware a little less flexible because it puts the burden on the route to provide the Etag value. But it achieves an extra 3000 requests per second on the profiling machine.
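One hypothetical shape for that design (a sketch, not the actual final code linked above) computes the ETag once, when the content is created, and has the route hand it to a much thinner middleware. The server and timestamp references are from the earlier listings.

'use strict'

// Sketch: the route supplies a precomputed ETag, so the middleware does
// no hashing or serialization on any request.
const crypto = require('crypto')

function etagger (etag) {
  return function (req, res, next) {
    res.set('Etag', etag)
    res.set('Cache-Control', 'public, max-age=120')
    next()
  }
}

// The seed content never changes, so hash it a single time at startup.
const content = crypto.rng(5000).toString('hex')
const contentEtag = crypto.createHash('sha512')
  .update(content)
  .digest('hex')

server.get('/seed/v1', etagger(contentEtag), (req, res, next) => {
  res.send({ data: content, url: req.url, ts: timestamp() })
  next()
})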

Let’s take a look at a flame graph for these final improvements:


[Figure: A healthy flame graph after all performance improvements; net module internals are now the hottest area]

The hottest part of the flame graph is part of Node core, in the net module. This is ideal.

Preventing Performance Problems

To round off, here are some suggestions on ways to prevent performance issues before they are deployed.

Using performance tools as informal checkpoints during development can filter out performance bugs before they make it into production. Making AutoCannon and Clinic (or equivalents) part of everyday development tooling is recommended.

When buying into a framework, find out what its policy on performance is. If the framework does not prioritize performance, then it’s important to check whether that aligns with infrastructural practices and business goals. For instance, Restify has clearly (since the release of version 7) invested in enhancing the library’s performance. However, if low cost and high speed is an absolute priority, consider Fastify, which has been measured as 17% faster by a Restify contributor.

Watch out for other widely impacting library choices — especially consider logging. As developers fix issues, they may decide to add additional log output to help debug related problems in the future. If an unperformant logger is used, this can strangle performance over time after the fashion of the boiling frog fable. The pino logger is the fastest newline delimited JSON logger available for Node.js.
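A minimal pino setup looks like the following; each call writes a single line of newline-delimited JSON to stdout:

'use strict'

const logger = require('pino')()

// Newline-delimited JSON, cheap enough to leave in hot paths.
logger.info({ route: '/seed/v1' }, 'request served')
logger.error(new Error('upstream timeout'), 'request failed')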

Finally, always remember that the Event Loop is a shared resource. A Node.js server is ultimately constrained by the slowest logic in the hottest path.




Developing A Chatbot Using Microsoft’s Bot Framework, LUIS And Node.js (Part 1)

This tutorial gives you hands-on access to my journey of creating a digital assistant capable of connecting with any system via a RESTful API to perform various tasks.


Here, I’ll be demonstrating how to save a user’s basic information and create a new project on their behalf via natural language processing (NLP).


How to do server-side testing for single page app optimization


Gettin’ technical.

We talk a lot about marketing strategy on this blog. But today, we are getting technical.

In this post, I team up with WiderFunnel front-end developer, Thomas Davis, to cover the basics of server-side testing from a web development perspective.

The alternative to server-side testing is client-side testing, which has arguably been the dominant testing method for many marketing teams, due to ease and speed.

But modern web applications are becoming more dynamic and technically complex. And testing within these applications is becoming more technically complex.

Server-side testing is a solution to this increased complexity. It also allows you to test much deeper. Rather than being limited to testing images or buttons on your website, you can test algorithms, architectures, and re-brands.

Simply put: If you want to test on an application, you should consider server-side testing.

Let’s dig in!

Note: Server-side testing is a tactic that is linked to single page applications (SPAs). Throughout this post, I will refer to web pages and web content within the context of a SPA. Applications such as Facebook, Airbnb, Slack, BBC, Codecademy, eBay, and Instagram are SPAs.


Defining server-side and client-side rendering

In web development terms, “server-side” refers to “occurring on the server side of a client-server system.”

The client refers to the browser, and client-side rendering occurs when:

  1. A user requests a web page,
  2. The server finds the page and sends it to the user’s browser,
  3. The page is rendered on the user’s browser, and any scripts run during or after the page is displayed.
[Figure: A basic representation of server-client communication]

The server is where the web page and other content live. With server-side rendering, the requested web page is sent to the user’s browser in final form:

  1. A user requests a web page,
  2. The server interprets the script in the page, and creates or changes the page content to suit the situation
  3. The page is sent to the user in final form and then cannot be changed using server-side scripting.

To talk about server-side rendering, we also have to talk a little bit about JavaScript. JavaScript is a scripting language that adds functionality to web pages, such as a drop-down menu or an image carousel.

Traditionally, JavaScript has been executed on the client side, within the user’s browser. However, with the emergence of Node.js, JavaScript can be run on the server side. All JavaScript executing on the server is running through Node.js.

*Node.js is an open-source, cross-platform JavaScript runtime environment, used to execute JavaScript code server-side. It uses the Chrome V8 JavaScript engine.
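As a tiny illustration, the same language that animates a drop-down menu in the browser can serve the page itself:

// A minimal Node.js server: JavaScript running entirely server-side.
const http = require('http')

http.createServer((req, res) => {
  res.end('<h1>Rendered on the server</h1>')
}).listen(3000)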

In layman’s (ish) terms:

When you visit a SPA web application, the content you are seeing is either being rendered in your browser (client-side), or on the server (server-side).

If the content is rendered client-side, JavaScript builds the application HTML content within the browser, and requests any missing data from the server to fill in the blanks.

Basically, the page is incomplete upon arrival, and is completed within the browser.

If the content is being rendered server-side, your browser receives the application HTML, pre-built by the server. It doesn’t have to fill in any blanks.

Why do SPAs use server-side rendering?

There are benefits to both client-side rendering and server-side rendering, but render performance and page load time are two huge pros for the server side.

(A 1 second delay in page load time can result in a 7% reduction in conversions, according to Kissmetrics.)

Server-side rendering also enables search engine crawlers to find web content, improving SEO; and social crawlers (like the crawlers used by Facebook) do not evaluate JavaScript, making server-side rendering beneficial for social searching.

With client-side rendering, the user’s browser must download all of the application JavaScript, and wait for a response from the server with all of the application data. Then, it has to build the application, and finally, show the complete HTML content to the user.

All of which to say, with a complex application, client-side rendering can lead to sloooow initial load times. And, because client-side rendering relies on each individual user’s browser, the developer only has so much control over load time.

Which explains why some developers are choosing to render their SPAs on the server side.

But, server-side rendering can disrupt your testing efforts, if you are using a framework like Angular or React.js. (And the majority of SPAs use these frameworks).

The disruption occurs because the version of your application that exists on the server becomes out of sync with the changes being made by your test scripts on the browser.

NOTE: If your web application uses Angular, React, or a similar framework, you may have already run into client-side testing obstacles. For more on how to overcome these obstacles, and successfully test on AngularJS apps, read this blog post.


Testing on the server side vs. the client side

Client-side testing involves making changes (the variation) within the browser by injecting JavaScript after the original page has already loaded.

The original page loads, the content is hidden, the necessary elements are changed in the background, and the ‘new’ version is shown to the user post-change. (Because the page is hidden while these changes are being made, the user is none the wiser.)

As I mentioned earlier, the advantages of client-side testing are ease and speed. With a client-side testing tool like VWO, a marketer can set up and execute a simple test using a WYSIWYG editor without involving a developer.

But for complex applications, client-side testing may not be the best option: Layering more JavaScript on top of an already-bulky application means even slower load time, and an even more cumbersome user experience.

A Quick Hack

There is a workaround if you are determined to do client-side testing on a SPA application. Web developers can take advantage of features like Optimizely’s conditional activation mode to make sure that testing scripts are only executed when the application reaches a desired state.

However, this can be difficult, as developers will have to take many variables into account, like location changes performed by the $routeProvider, or triggering interaction-based goals.

To avoid flicker, you may need to hide content until the front-end application has initialized in the browser, voiding the performance benefits of using server-side rendering in the first place.

[Figure: Activation Mode waits until the framework has loaded before executing your test]



When you do server-side testing, there are no modifications being made at the browser level. Rather, the parameters of the experiment variation (‘User 1 sees Variation A’) are determined at the server route level and hooked straight into the JavaScript application through a service provider.

Here is an example where we are testing a pricing change:
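(The sketch below is a hypothetical reconstruction rather than WiderFunnel’s actual code; the experiment key, variation name, datafile path, and prices are invented for illustration.)

// The server assigns the variation and renders the matching price, so the
// page arrives pre-built: no client-side swap, and no flicker.
const express = require('express')
const optimizelySDK = require('@optimizely/optimizely-sdk')

const app = express()
const optimizely = optimizelySDK.createInstance({
  datafile: require('./optimizely-datafile.json') // fetched from Optimizely
})

app.get('/pricing', (req, res) => {
  const userId = '42' // hard-coded for the example; see the note below
  const variation = optimizely.activate('pricing_experiment', userId)
  const price = variation === 'higher_price' ? 89 : 79

  res.render('pricing', { price: price })
})

app.listen(3000)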

“Ok, so, if I want to do server-side testing, do I have to involve my web development team?”

Yep.

But, this means that testing gets folded into your development team’s work flow. And, it means that it will be easier to integrate winning variations into your code base in the end.

If yours is a SPA, server-side testing may be the better choice, despite the work involved. Not only does server-side testing embed testing into your development workflow, it also broadens the scope of what you can actually test.

Rather than being limited to testing page elements, you can begin testing core components of your application’s usability like search algorithms and pricing changes.

A server-side test example!

For web developers who want to do server-side testing on a SPA, Tom has put together a basic example using the Optimizely SDK. This example is an illustration, and is not functional.

In it, we are running a simple experiment that changes the color of a button. The example is built using Angular Universal and Express. A global service provider is used to fetch the user’s variation from the Optimizely SDK.

Here, we have simply hard-coded the user ID. However, Optimizely requires that each user have a unique ID. Therefore, you may want to use the user ID that already exists in your database, or store a cookie through Express’s cookie middleware.
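A hypothetical sketch of that cookie approach, using the standard cookie-parser middleware for Express (the cookie name is invented):

const cookieParser = require('cookie-parser')
const crypto = require('crypto')

app.use(cookieParser())
app.use((req, res, next) => {
  // Assign each visitor a stable, unique ID the first time we see them.
  if (!req.cookies.optimizelyUserId) {
    const id = crypto.randomBytes(16).toString('hex')
    res.cookie('optimizelyUserId', id)
    req.cookies.optimizelyUserId = id
  }
  next()
})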

Are you currently doing server-side testing?

Or, are you client-side testing on a SPA application? What challenges (if any) have you faced? How have you handled them? Do you have any specific questions? Let us know in the comments!


Beyond The Browser: From Web Apps To Desktop Apps

I started out as a web developer, and that’s now one part of what I do as a full-stack developer, but never had I imagined I’d create things for the desktop. I love the web. I love how altruistic our community is, how it embraces open-source, testing and pushing the envelope.


I love discovering beautiful websites and powerful apps. When I was first tasked with creating a desktop app, I was apprehensive and intimidated. It seemed like it would be difficult, or at least… different.


How To Develop An Interactive Command Line Application Using Node.js

Over the last five years, Node.js has helped to bring uniformity to software development. You can do anything in Node.js, whether it be front-end development, server-side scripting, cross-platform desktop applications, cross-platform mobile applications, Internet of Things, you name it. Writing command line tools has also become easier than ever before because of Node.js — not just any command line tools, but tools that are interactive, useful and less time-consuming to develop.


If you are a front-end developer, then you must have heard of or worked on Gulp, Angular CLI, Cordova, Yeoman and others. Have you ever wondered how they work?


Next Generation Server Compression With Brotli

Chances are pretty good that you’ve worked with, or at least understand the concept of, server compression. By compressing website assets on the server prior to transferring them to the browser, we’ve been able to achieve substantial performance gains.


For quite some time, the venerable gzip algorithm has been the go-to solution for reducing the size of page assets. A new kid on the block has been gaining support in modern browsers, and its name is Brotli. In this article, you’ll get hands-on with Brotli by writing a Node.js-powered HTTP server that implements this new algorithm, and we’ll compare its performance to gzip.


Optimizing Critical-Path Performance With Express Server And Handlebars

Recently, I’ve been working on an isomorphic React website. This website was developed using React, running on an Express server. Everything was going well, but I still wasn’t satisfied with a load-blocking CSS bundle. So, I started to think about options for how to implement the critical-path technique on an Express server.

This article contains my notes about installing and configuring a critical-path performance optimization using Express and Handlebars. Throughout this article, I’ll be using Node.js and Express. Familiarity with them will help you understand the examples.



Sailing With Sails.js: An MVC-style Framework For Node.js


I had been doing server-side programming with Symfony 2 and PHP for at least three years before I started to see some productivity problems with it. Don’t get me wrong, I like Symfony quite a lot: It’s a mature, elegant and professional framework. But I’ve realized that too much of my precious time is spent not on the business logic of the application itself, but on supporting the architecture of the framework.


I don’t think I’ll surprise anyone by saying that we live in a fast-paced world. The whole startup movement is a constant reminder to us that, in order to achieve success, we need to be able to test our ideas as quickly as possible. The faster we can iterate on our ideas, the faster we can reach customers with our solutions, and the better our chances of getting a product-market fit before our competitors do or before we exceed our limited budget. And in order to do so, we need instruments suitable to this type of work.



React To The Future With Isomorphic Apps

Things often come full circle in software engineering. The web in particular started with servers delivering content down to the client. Recently, with the creation of modern web frameworks such as AngularJS and Ember, we’ve seen a push to render on the client and only use a server for an API. We’re now seeing a possible return or, rather, more of a combination of both architectures happening.

What Is React?

React is, according to the official website [1]:

“A JavaScript library for building user interfaces.”

It is a way to create reusable front-end components. Plain and simple, that is the goal of React.

What Makes It Different?

React has quickly risen to immense popularity in the JavaScript community. There are a number of reasons for its success. One is that Facebook created it and uses it. This means that many developers at Facebook work with it, fixing bugs, suggesting features and so on.


Another reason for its quick popularity is that it’s different. It’s unlike AngularJS [2], Backbone.js [3], Ember [4], Knockout [5] and pretty much any of the other popular MV* JavaScript frameworks that have come out during the JavaScript revolution in the last few years. Most of these other frameworks operate on the idea of two-way binding to the DOM and updating it based on events. They also all require the DOM to be present; so, when you’re working with one of these frameworks and you want any of your markup to be rendered on the server, you have to use something like PhantomJS.

Virtual DOM

React is often described as the “V” in an MVC application. But it does the V quite differently than other MV* frameworks. It’s different from things like Handlebars, Underscore templates and AngularJS templates. React operates on the concept of a “virtual DOM.” It maintains this virtual DOM in memory, and any time a change is made to the DOM, React does a quick diff of the changes, batches them all into one update and hits the actual DOM all at once.

This has huge ramifications. First and foremost, performance-wise, you’re not constantly doing DOM updates, as with many of the other JavaScript frameworks. The DOM is a huge bottleneck with front-end performance. The second ramification is that React can render on the server just as easily as it can on the client.

React exposes a method called React.renderToString(). This method enables you to pass in a component, which in turn renders it and any child components it uses, and simply returns a string. You can then take that string of HTML and simply send it down to the client.
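For example (the output shown is indicative; the exact attributes React adds will vary):

// Render a component, and its children, to a plain HTML string.
var html = React.renderToString(React.createElement(HelloWorld, { message: "world" }));

// html is now something like:
// '<h1 data-reactid=".abc123">Hello world</h1>'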

Example

These components are built with a syntax called JSX. At first, JSX looks like a weird HTML-JavaScript hybrid:

var HelloWorld = React.createClass({
    displayName: "HelloWorld",
    render() {
        return (
            <h1>Hello {this.props.message}</h1>
        );
    }
});

React.render(<HelloWorld message="world" />, document.body);

What you do with this .jsx format is pass it through (or “transpile”) webpack, grunt, gulp, or your “renderer” of choice and then spit out JavaScript that looks like this:

var HelloWorld = React.createClass({
  displayName: "HelloWorld",
  render: function() {
    return (
        React.createElement("h1", null, "Hello ", this.props.message)
    );
  }
});

React.render(React.createElement(HelloWorld, { message: "world" }), document.body);

That’s what our HelloWorld.jsx component transpiles to — nothing more than simple JavaScript. Some would consider this a violation of the separation of concerns by mixing JavaScript with HTML. At first, this seems like exactly what we’re doing. However, after working with React for a while, you realize that the close proximity of your component’s markup to the JavaScript enables you to develop more quickly and to maintain it longer because you’re not jumping back and forth between HTML and JavaScript files. All the code for a given component lives in one place.

React.render attaches your <HelloWorld> component to the body. Naturally, that could be any element there. This causes the component’s render method to fire, and the result is added to the DOM inside the <body> tag.

With a React component, whatever you pass in as attributes — say, <HelloWorld message="world" /> — you have access to in the component’s this.props. So, in the <HelloWorld> component, this.props.message is world. Also, look a bit closer at the JSX part of the code:

return (
    <h1>Hello {this.props.message}</h1>
);

You’ll notice first that you have to wrap the HTML in parentheses. Secondly, this.props.message is wrapped in braces. The braces give you access to the component via this.

Each component also has access to its “state.” With React, each component manages its state with a simple API: this.state and setState, as well as getInitialState for when the component first loads. Whenever the state changes, the render method simply re-renders the component. For example:

var Search = React.createClass({
    getInitialState() {
        return {
            search: ""
        };
    },
    render() {
        return (
            <div className="search-component">
                <input type="text" onChange={this.changeSearch} />
                <span>You are searching for: {this.state.search}</span>
            </div>
        );
    },
    changeSearch(event) {
        var text = event.target.value;

        this.setState({
            search: text
        });
    }
});

React.render(<Search />, document.body);

In this example, the getInitialState function simply returns an object literal containing the initial state of the component.

The render function returns JSX for our elements — so, an input and a span, both wrapped in a div. Keep in mind that only one element can ever be returned in JSX as a parent. In other words, you can’t return <div></div><div></div>; you can only return one element with multiple children.

Notice the onChange={this.changeSearch}. This tells the component to fire the changeSearch function when the change event fires on the input.

The changeSearch function receives the event fired from the DOM event and can grab the current text of the input. Then, we call setState and pass in the text. This causes render to fire again, and the this.state.search will reflect the new change.

Many other APIs in React are available to work with, but at a high level, what we did above is as easy as it gets for creating a simple React component.

Isomorphic JavaScript

With React, we can build “isomorphic” apps.

i·so·mor·phic: “corresponding or similar in form and relations”

This has already become a buzzword in 2015. Basically, it just means that we get to use the same code on the client and on the server.

This approach has many benefits.

Eliminate the FOUC

With AngularJS, Ember (for now) and SPA-type architecture, when a user first hits the page, all of the assets have to download. With SPA applications, this can take a second, and most users these days expect a loading time of less than two seconds. While content is loading, the page is unrendered. This is called the “flash of unstyled content” (FOUC). One benefit of an isomorphic approach to building applications is that you get the speed benefits of rendering on the server, and you can still render components after the page loads on the client.

The job of an isomorphic app is not to replace the traditional server API, but merely to help eliminate FOUC and to give users the better, faster experience that they are growing accustomed to.

Shared Code

One big benefit is being able to use the same code on the client and on the server. Simply create your components, and they will work in both places. In most systems, such as Rails [6] and ASP.NET MVC [7], you will typically have erb or cshtml views for rendering on the server. You then have to have client-side templates, such as Handlebars or Hogan.js, which often duplicate logic. With React, the same components work in both places.

Progressive Enhancement

Server rendering allows you to send down the barebones HTML that a client needs to display a website. You can then enhance the experience or render more components in the client.

Delivering a nice experience to a user on a flip phone in Africa, as well as an enhanced experience to a user on a 15-inch MacBook Pro with Retina Display, hooked up to the new 4K monitor, is normally a rather tedious task.

React goes above and beyond just sharing components. When you render React components on the server and ship the HTML down to the client, React on the client side notices that the HTML already exists. It simply attaches event handlers to the existing elements, and you’re ready to go.

This means that you can ship down only the HTML needed to render the page; then, any additional things can be pulled in and rendered on the client as needed. You get the benefit of fast page loading by server rendering, and you can reuse the components.

Creating An Isomorphic Express App

Express [8] is one of the most popular Node.js web servers. Getting up and running with rendering React with Express is very easy.

Adding React rendering to an Express app takes just a few steps. First, add node-jsx and react to your project with this:


npm install node-jsx --save
npm install react --save

Let’s create a basic app.jsx file in the public/javascripts/components directory, which requires our Search component from earlier:

var React = require("react"),
    Search = require("./search");

var App = React.createClass({
    render() {
        return (
            <Search />
        );
    }
});

module.exports = App;

Here, we are requiring react and our Search.jsx component. In the App render method, we can simply use the component with <Search />.

Then, add the following to one of your routers where you’re planning on rendering with React:

require("node-jsx").install(
    harmony: true,
    extension: ".jsx"
);

All this does is allow us to actually use require to grab .jsx files. Otherwise, Node.js wouldn’t know how to parse them. The harmony option allows for ECMAScript 6-style components.

Next, require in your component and pass it to React.createFactory, which will return a function that you can call to invoke the component:

var React = require("react"),
    App = React.createFactory(require("../public/javascripts/components/app")),
    express = require("express"),
    router = express.Router();

Then, in a route, simply call React.renderToString and pass it your component:

router.get("/", function(req, res) 
    var markup = React.renderToString(
        App()
    );

    res.render("index", 
        markup: markup
    );
});

Finally, in your view, simply output the markup:


<body>
    <div id="content">
        {{{markup}}}
    </div>
</body>

That’s it for the server code. Let’s look at what’s necessary on the client side.

Webpack

Webpack [9] is a JavaScript bundler. It bundles all of your static assets, including JavaScript, images, CSS and more, into a single file. It also enables you to process the files through different types of loaders. You could write your JavaScript with CommonJS or AMD module syntax.

For React .jsx files, you’ll just need to configure your webpack.config file a bit in order to compile all of your jsx components.

Getting started with Webpack is easy:


npm install webpack -g # Install webpack globally
npm install jsx-loader --save # Install the jsx loader for webpack

Next, create a webpack.config.js file.

var path = require("path");

module.exports = [{
    context: path.join(__dirname, "public", "javascripts"),
    entry: "app",
    output: {
        path: path.join(__dirname, "public", "javascripts"),
        filename: "bundle.js"
    },
    module: {
        loaders: [
            { test: /\.jsx$/, loader: "jsx-loader?harmony" }
        ]
    },
    resolve: {
        // You can now require('file') instead of require('file.coffee')
        extensions: ["", ".js", ".jsx"],
        root: [path.join(__dirname, "public", "javascripts")],
        modulesDirectories: ["node_modules"]
    }
}];

Let’s break this down:

  • context
    This is the root of your JavaScript files.
  • entry
    This is the main file that will load your other files using CommonJS’ require syntax by default.
  • output
    This tells Webpack to output the code in a bundle, with a path of public/javascripts/bundle.js.

The module object is where you set up “loaders.” A loader simply enables you to test for a file extension and then pass that file through a loader. Many loaders exist for things like CSS, Sass, HTML, CoffeeScript and JSX. Here, we just have the one, jsx-loader?harmony. You can append options as a “query string” to the loader’s name. Here, ?harmony enables us to use ECMAScript 6 syntax in our modules. The test tells Webpack to pass any file with .jsx at the end to jsx-loader.

In resolve we see a few other options. First, extensions tells Webpack to omit the extensions of certain file types when we require files. This allows us just to do require("./file"), rather than require("./file.js"). We’re also going to set a root, which is simply the root of where our files will be required from. Finally, we’ll allow Webpack to pull modules from the node_modules directory with the modulesDirectories option. This enables us to install something like Handlebars with npm install handlebars and simply require("handlebars"), as you would in a Node.js app.

Client-Side Code

In public/javascripts/app.js, we’ll require in the same App component that we required in Express:

var React = require("react"),
    App = React.createFactory(require("components/app"));

if (typeof window !== "undefined") {
    window.onload = function() {
        React.render(App(), document.getElementById("content"));
    };
}

We’re going to check that we’re in the browser with the typeof window !== "undefined". Then, we’ll attach to the onload event of the window, and we’ll call React.render and pass in our App(). The second argument we need here is a DOM element to mount to. This needs to be the same element in which we rendered the React markup on the server — in this case, the #content element.

The Search component in the example above was rendered on the server and shipped down to the client. The client-side React sees the rendered markup and attaches only the event handlers! This means we’ll get to see an initial page while the JavaScript loads.

All of the code above is available on GitHub [10].

Conclusion

Web architecture definitely goes through cycles. We started out rendering everything on the server and shipping it down to the client. Then, JavaScript came along, and we started using it for simple page interactions. At some point, JavaScript grew up and we realized it could be used to build large applications that render all on the client and that use the server to retrieve data through an API.

In 2015, we’re starting to realize that we have these powerful servers, with tons of memory and CPU, and that they do a darn good job of rendering stuff for us. This isomorphic approach to building applications might just give us the best of both worlds: using JavaScript in both places, and delivering to the user a good experience by sending down something they can see quickly and then building on that with client-side JavaScript.

React is one of the first of what are sure to be many frameworks that enable this type of behavior. Ember’s developers are already working on isomorphic-style applications as well. Seeing how this all works out is definitely going to be fun!


Footnotes

  1. http://facebook.github.io/react/
  2. https://angularjs.org/
  3. http://backbonejs.org
  4. http://emberjs.com/
  5. http://knockoutjs.com
  6. http://rubyonrails.org/
  7. http://asp.net/mvc
  8. http://expressjs.com/
  9. http://webpack.github.io/
  10. https://github.com/jcreamer898/expressiso
  11. http://facebook.github.io/react/
  12. https://egghead.io/technologies/react
  13. http://expressjs.com/
  14. http://react.rocks/tag/Isomorphic
  15. http://webpack.github.io
  16. https://github.com/petehunt/jsx-loader


Love Generating SVG With JavaScript? Move It To The Server!

I hope that by now, in 2014, there is no need to explain why SVG is a blessing to developers who want to ensure that their graphics look sharp on all devices, especially with their huge diversity of resolutions.
But just like any other technology, SVG has its limitations. And in this article, we’ll talk about how to bypass some of them.
