If you know why you want to use Beacon already, feel free to jump directly to the Getting Started section.
What Is The Beacon API For?
The Beacon API is used for sending small amounts of data to a server without waiting for a response. That last part is critical and is the key to why Beacon is so useful — our code never even gets to see a response, even if the server sends one. Beacons are specifically for sending data and then forgetting about it. We don’t expect a response and we don’t get a response.
Think of it like a postcard sent home when on vacation. You put a small amount of data on it (a bit of “Wish you were here” and “The weather’s been lovely”), put it in the mailbox, and you don’t expect a response. No one sends a return postcard saying “Yes, I do wish I was there actually, thank you very much!”
For modern websites and applications, there are a number of use cases that fall very neatly into this pattern of send-and-forget.
Tracking Stats And Analytics Data
There’s no reason we couldn’t also cover the sort of mundane tasks often handled by Google Analytics, reporting on the user themselves and the capability of their device and browser. If the user has a logged in session, you could even tie those stats back to a known individual. Whatever data you gather, you can send it back to the server with Beacon.
Debugging And Logging
In fact, any logging task can usefully be performed using Beacon, be that creating save-points in a game, collecting information on feature use, or recording results from a multivariate test. If it’s something that happens in the browser that you want the server to know about, then Beacon is likely a contender.
Can’t We Already Do This?
I know what you’re thinking. None of this is new, is it? We’ve been able to communicate from the browser to the server using XMLHttpRequest for more than a decade. More recently we also have the Fetch API, which does much the same thing with a more modern promise-based interface. Given that, why do we need the Beacon API at all?
The key here is that because we don’t get a response, the browser can queue up the request and send it without blocking execution of any other code. As far as the browser is concerned, it doesn’t matter whether our code is still running or where script execution has got to; since there’s nothing to return, it can simply background the sending of the HTTP request until it’s convenient to send it.
That might mean waiting until CPU load is lower, or until the network is free, or even just sending it right away if it can. The important thing is that the browser queues the beacon and returns control immediately. It does not hold things up while the beacon sends.
To understand why this is a big deal, we need to look at how and when these sorts of requests are issued from our code. Take our example of an analytics logging script. Our code may be timing how long the users spend on a page, so it becomes critical that the data is sent back to the server at the last possible moment. When the user goes to leave a page, we want to stop timing and send the data back home.
Typically, you’d use either the unload or beforeunload event to execute the logging. These are fired when the user does something like following a link on the page to navigate away. The trouble here is that code running on one of the unload events can block execution and delay the unloading of the page. If unloading of the page is delayed, then loading the next page is also delayed, and so the experience feels really sluggish.
Keep in mind how slow HTTP requests can be. If you’re thinking about performance, typically one of the main factors you try to cut down on is extra HTTP requests because going out to the network and getting a response can be super slow. The very last thing you want to do is put that slowness between the activation of a link and the start of the request for the next page.
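To make that concrete, the pattern Beacon replaces looks something like this (a sketch with a made-up endpoint; synchronous XHR is deprecated, which is rather the point):

```javascript
// Anti-pattern: a synchronous XHR inside an unload handler forces the
// browser to wait for the whole round trip before it can navigate away.
function attachBlockingLogger() {
  window.addEventListener('unload', function () {
    let xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/log', false); // third argument false = synchronous
    xhr.send('leaving=1');
  });
}
```

Every millisecond that request takes is added directly to the user’s navigation.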
Beacon gets around this by queuing the request without blocking, returning control immediately back to your script. The browser then takes care of sending that request in the background without blocking. This makes everything much faster, which makes users happier and lets us all keep our jobs.
Getting Started
So we understand what Beacon is and why we might use it. Let’s get started with some code. The basics couldn’t be simpler:
let result = navigator.sendBeacon(url, data);
The result is a boolean: true if the browser accepted and queued the request, and false if there was a problem in doing so.
navigator.sendBeacon takes two parameters. The first is the URL to make the request to. The request is performed as an HTTP POST, sending any data provided in the second parameter.
The data parameter can be in one of several formats, all of which are taken directly from the Fetch API. This can be a Blob, a BufferSource, FormData or URLSearchParams — basically any of the body types used when making a request with Fetch.
I like using FormData for basic key-value data as it’s uncomplicated and easy to read back.
// URL to send the data to
let url = '/api/my-endpoint';
// Create a new FormData and add a key/value pair
let data = new FormData();
data.append('hello', 'world');
let result = navigator.sendBeacon(url, data);
Support in browsers for Beacon is very good, with the only notable exceptions being Internet Explorer (works in Edge) and Opera Mini. For most uses, that should be fine, but it’s worth testing for support before trying to use navigator.sendBeacon.
That’s easy to do:
if (navigator.sendBeacon) {
  // Beacon code
} else {
  // No Beacon. Maybe fall back to XHR?
}
If Beacon isn’t available and your request is important, you could fall back to a blocking method such as XHR. Depending on your audience and purpose, you might equally choose to not bother.
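A sketch of such a fallback might look like this (the endpoint name is made up; note that fetch() with the keepalive flag behaves much like a beacon in browsers that support it):

```javascript
// Send logging data with Beacon if we can, falling back to fetch()
// with keepalive, and finally to a blocking synchronous XHR.
function sendLog(url, data) {
  if (navigator.sendBeacon) {
    return navigator.sendBeacon(url, data);
  }
  if (typeof fetch === 'function') {
    // keepalive allows the request to outlive the page, beacon-style
    fetch(url, { method: 'POST', body: data, keepalive: true });
    return true;
  }
  // Last resort: a synchronous XHR blocks unloading until it completes
  let xhr = new XMLHttpRequest();
  xhr.open('POST', url, false);
  xhr.send(data);
  return true;
}
```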
An Example: Logging Time On A Page
To see this in practice, let’s create a basic system to time how long a user stays on a page. When the page loads we’ll note the time, and when the user leaves the page we’ll send the start time and current time to the server.
As we only care about time spent (not the actual time of day) we can use performance.now() to get a basic timestamp as the page loads:
let startTime = performance.now();
If we wrap up our logging into a function, we can call it when the page unloads.
let logVisit = function() {
  // Test that we have support
  if (!navigator.sendBeacon) return true;

  // URL to send the data to, e.g.
  let url = '/api/log-visit';

  // Data to send
  let data = new FormData();
  data.append('start', startTime);
  data.append('end', performance.now());

  // Let's go!
  navigator.sendBeacon(url, data);
};
Finally, we need to call this function when the user leaves the page. My first instinct was to use the unload event, but Safari on a Mac seems to block the request with a security warning, so beforeunload works just fine for us here.
window.addEventListener('beforeunload', logVisit);
When the page unloads (or, just before it does), our logVisit() function will be called and, provided the browser supports the Beacon API, our beacon will be sent.
(Note that if there is no Beacon support, we return true and pretend it all worked great. Returning false would cancel the event and stop the page unloading. That would be unfortunate.)
Considerations When Tracking
As so many of the potential uses for Beacon revolve around tracking of activity, I think it would be remiss not to mention the social and legal responsibilities we have as developers when logging and tracking activity that could be tied back to users.
We may think of the recent European GDPR laws as they relate to email, but of course, the legislation relates to storing any type of personal data. If you know who your users are and can identify their sessions, then you should check what activity you are logging and how it relates to your stated policies.
Often we don’t need to track as much data as our instincts as developers tell us we should. It can be better to deliberately not store information that would identify a user, and then you reduce your likelihood of getting things wrong.
DNT: Do Not Track
In addition to legal requirements, most browsers have a setting to enable the user to express a desire not to be tracked. Do Not Track sends an HTTP header with the request that looks like this:
DNT: 1
If you’re logging data that can track a specific user and the user sends a positive DNT header, then it would be best to follow the user’s wishes and anonymize that data or not track it at all.
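One way to honour that on the client is to check the setting before sending anything (a sketch; browsers have exposed this value inconsistently over the years, but navigator.doNotTrack === '1' is the common opt-out signal):

```javascript
// Only send tracking beacons if the user hasn't enabled Do Not Track.
function trackingAllowed() {
  return navigator.doNotTrack !== '1';
}

function logEvent(url, data) {
  if (!trackingAllowed() || !navigator.sendBeacon) return false;
  return navigator.sendBeacon(url, data);
}
```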
In PHP, for example, you can very easily test for this header like so:
if (!empty($_SERVER['HTTP_DNT'])) {
  // User does not wish to be tracked ...
}
The Beacon API is a really useful way to send data from a page back to the server, particularly in a logging context. Browser support is very broad, and it enables you to seamlessly log data without negatively impacting the user’s browsing experience or the performance of your site. The non-blocking nature of the requests means performance is much better than with blocking alternatives such as XHR or Fetch.
If you’d like to read more about the Beacon API, the following sites are worth a look.
Building A Pub/Sub Service In-House Using Node.js And Redis
Today’s world operates in real time. Whether it’s trading stock or ordering food, consumers today expect immediate results. Likewise, we all expect to know things immediately — whether it’s in news or sports. Zero, in other words, is the new hero.
This applies to software developers as well — arguably some of the most impatient people! Before diving into BrowserStack’s story, it would be remiss of me not to provide some background about Pub/Sub. For those of you who are familiar with the basics, feel free to skip the next two paragraphs.
Many applications today rely on real-time data transfer. Let’s look closer at an example: social networks. The likes of Facebook and Twitter generate relevant feeds, and you (via their app) consume them and spy on your friends. They accomplish this with a messaging feature, wherein if a user generates data, it will be posted for others to consume in nothing short of a blink. Any significant delays and users will complain, usage will drop, and, if it persists, users will churn. The stakes are high, and so are user expectations. So how do services like WhatsApp, Facebook, TD Ameritrade, Wall Street Journal and GrubHub support high volumes of real-time data transfers?
All of them use a similar software architecture at a high level called a “Publish-Subscribe” model, commonly referred to as Pub/Sub.
“In software architecture, publish–subscribe is a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers, but instead categorize published messages into classes without knowledge of which subscribers, if any, there may be. Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are.“
At BrowserStack, all of our products support (in one way or another) software with a substantial real-time dependency component — whether it’s Automate test logs, freshly baked browser screenshots, or 15fps mobile streaming.
In such cases, if a single message drops, a customer can lose information vital for preventing a bug. Therefore, we needed to scale for varied data size requirements. For example, with device logger services at a given point of time, there may be 50MB of data generated under a single message. Sizes like this could crash the browser. Not to mention that BrowserStack’s system would need to scale for additional products in the future.
As the size of data for each message differs from a few bytes to up to 100MB, we needed a scalable solution that could support a multitude of scenarios. In other words, we sought a sword that could cut all cakes. In this article, I will discuss the why, how, and results of building our Pub/Sub service in-house.
Through the lens of BrowserStack’s real-world problem, you will get a deeper understanding of the requirements and process of building your very own Pub/Sub.
Our Need For A Pub/Sub Service
BrowserStack has around 100M+ messages, each of which is somewhere between approximately 2 bytes and 100+ MB. These are passed around the world at any moment, all at different Internet speeds.
The largest generators of these messages, by message size, are our BrowserStack Automate and App Automate products. Both have real-time dashboards displaying all requests and responses for each command of a user test. So, if someone runs a test with 100 requests where the average request-response size is 10 bytes, this transmits 1×100×10 = 1,000 bytes.
Now let’s consider the larger picture as — of course — we don’t run just one test a day. More than approximately 850,000 BrowserStack Automate and App Automate tests are run each and every day. And yes, we average around 235 request-responses per test. Since users can take screenshots or ask for page sources in Selenium, our average request-response size is approximately 220 bytes.
So, going back to our calculator:
850,000×235×220 = 43,945,000,000 bytes (approx.) or only 43.945GB per day
Now let’s talk about BrowserStack Live and App Live. Surely we have Automate as our winner in terms of size of data. However, the Live products take the lead when it comes to the number of messages passed. For every live test, about 20 messages are passed for each minute it runs. We run around 100,000 live tests, with each test averaging around 12 minutes, meaning:
100,000×12×20 = 24,000,000 messages per day
Now for the awesome and remarkable bit: we build, run, and maintain the application for this, called pusher, on six t1.micro EC2 instances. The cost of running the service? About $70 per month.
Choosing To Build vs. Buying
First things first: As a startup, like most others, we were always excited to build things in-house. But we still evaluated a few services out there. The primary requirements we had were:
Reliability and stability,
High performance, and
Cost-effectiveness.
Let’s leave the cost-effectiveness criteria out, as I can’t think of any external services that cost under $70 a month (tweet me if you know one that does!). So our answer there is obvious.
In terms of reliability and stability, we found companies that provided Pub/Sub as a service with 99.9+ percent uptime SLA, but there were many T&C’s attached. The problem is not as simple as you think, especially when you consider the vast lands of the open Internet that lie between the system and client. Anyone familiar with Internet infrastructure knows stable connectivity is the biggest challenge. Additionally, the amount of data sent depends on traffic. For example, a data pipe that’s at zero for one minute may burst during the next. Services providing adequate reliability during such burst moments are rare (Google and Amazon).
Performance for our project means obtaining and sending data to all listening nodes at near zero latency. At BrowserStack, we utilize cloud services (AWS) along with co-location hosting. However, our publishers and/or subscribers could be placed anywhere. For example, it may involve an AWS application server generating much-needed log data, or terminals (machines where users can securely connect for testing). Coming back to the open Internet issue again, if we were to reduce our risk we would have to ensure our Pub/Sub leveraged the best host services and AWS.
Another essential requirement was the ability to transmit all types of data (Bytes, text, weird media data, etc.). With all considered, it did not make sense to rely on a third-party solution to support our products. In turn, we decided to revive our startup spirit, rolling up our sleeves to code our own solution.
Building Our Solution
Pub/Sub by design means there will be a publisher, generating and sending data, and a Subscriber accepting and processing it. This is similar to a radio: A radio channel broadcasts (publishes) content everywhere within a range. As a subscriber, you can decide whether to tune into that channel and listen (or turn off your radio altogether).
Unlike the radio analogy where data is free for all and anyone can decide to tune in, in our digital scenario we need authentication which means data generated by the publisher could only be for a single particular client or subscriber.
Above is a diagram providing an example of a good Pub/Sub with:
Publishers
Here we have two publishers generating messages based on pre-defined logic. In our radio analogy, these are our radio jockeys creating the content.
Topics
There are two here, meaning there are two types of data. We can say these are our radio channels 1 and 2.
Subscribers
We have three, each reading data on a particular topic. One thing to notice is that Subscriber 2 is reading from multiple topics. In our radio analogy, these are the people who are tuned into a radio channel.
Let’s start understanding the necessary requirements for the service.
An evented component
This kicks in only when there is something to kick in.
Transient storage
This keeps data persisted for a short duration, so if the subscriber is slow, it still has a window to consume it.
Reducing the latency
Connecting two entities over a network with minimum hops and distance.
We picked a technology stack that fulfilled the above requirements:
Node.js
Because why not? Evented, we wouldn’t need heavy data processing, plus it’s easy to onboard.
Redis
Supports perfectly short-lived data. It has all the capabilities to initiate, update and auto-expire. It also puts less load on the application.
Node.js For Business Logic Connectivity
Node.js is nearly perfect when it comes to writing code incorporating IO and events. Our particular problem had both, making this option the most practical for our needs.
Surely other languages such as Java could be more optimized, or a language like Python offers scalability. However, the cost of starting with these languages is so high that a developer could finish writing code in Node in the same duration.
To be honest, if the service had a chance of adding more complicated features, we could have looked at other languages or a complete stack. But here it is a marriage made in heaven. Here is our package.json:
Very simply put, we believe in minimalism especially when it comes to writing code. On the other hand, we could have used libraries like Express to write extensible code for this project. However, our startup instincts decided to pass on this and to save it for the next project. Additional tools we used:
ioredis
This is one of the most supported libraries for Redis connectivity with Node.js, used by companies including Alibaba.
socket.io
The best library for graceful connectivity and fallback with WebSocket and HTTP.
Redis For Transient Storage
Redis as a service is heavily reliable, scalable, and configurable. Plus, there are many reliable managed service providers for Redis, including AWS. Even if you don’t want to use a provider, Redis is easy to get started with.
Let’s break down the configurable part. We started off with the usual master-slave configuration, but Redis also comes with cluster or sentinel modes. Every mode has its own advantages.
If we could shard the data in some way, a Redis cluster would be the best choice. But if we sharded the data by some heuristic, we’d have less flexibility, as the heuristic would have to be followed everywhere. Fewer rules, more control is good for life!
Redis Sentinel works best for us, as data lookup is done in just the one node a client connects to at a given point in time, while the data is not sharded. This also means that even if multiple nodes are lost, the data is still distributed and present in other nodes. So you have higher availability and less chance of loss. Of course, this removes the pros of having a cluster, but our use case is different.
Architecture At 30,000 Feet
The diagram below provides a very high-level picture of how our Automate and App Automate dashboards work. Remember the real-time system that we had from the earlier section?
In our diagram, our main workflow is highlighted with thicker borders. The “automate” section consists of:
Terminals
Comprised of the pristine versions of Windows, OS X, Android, or iOS that you get while testing on BrowserStack.
Hub
The point of contact for all your Selenium and Appium tests with BrowserStack.
The “user service” section here is our gatekeeper, ensuring data is sent to and saved for the right individual. It is also our security keeper. The “pusher” section incorporates the heart of what we discussed in this article. It consists of the usual suspects including:
Redis
Our transient storage for messages, where in our case Automate logs are temporarily stored.
Publisher
This is basically the entity that obtains data from the hub. All your request-responses are captured by this component, which writes to Redis with session_id as the channel.
Subscriber
This reads data from Redis generated for the session_id. It is also the web server for clients to connect via WebSocket (or HTTP) to get data, which it then sends to authenticated clients.
Finally, we have the user’s browser section, representing an authenticated WebSocket connection to ensure session_id logs are sent. This enables the front-end JS to parse and beautify it for users.
Similar to the logs service, we have pusher here being used for other product integrations. Instead of session_id, we use another form of ID to represent that channel. This all works out of pusher!
We’ve had considerable success in building out our Pub/Sub service. To sum up why we built it in-house:
Scales better for our needs;
Cheaper than outsourced services;
Full control over the overall architecture.
Events and Redis as a system keep things simple for developers, as you can obtain data from one source and push it to another via Redis. So we built it.
If the usage fits into your system, I recommend doing the same!
Monthly Web Development Update 5/2018: Browser Performance, Iteration Zero, And Web Authentication
As developers, we often talk about performance and request browsers to render things faster. But when they finally do, we demand even more performance.
Alex Russell from the Chrome team now shared some thoughts on developers abusing browser performance and explains why websites are still slow even though browsers have reinvented themselves with incredibly fast rendering engines. This is in line with an article by Oliver Williams in which he states that we’re focusing on the wrong things: instead of delivering the fastest solutions for slower machines and browsers, we’re serving even bigger bundles with polyfills and transpiled code to every browser.
It certainly isn’t easy to break out of this pattern and keep bundle size to a minimum in the interest of the user, but we have the technologies to achieve that. So let’s explore non-traditional ways and think about the actual user experience more often — before defining a project workflow instead of afterward.
Front-End Performance Checklist 2018
To help you cater for fast and smooth experiences, Vitaly Friedman summarized everything you need to know to optimize your site’s performance in one handy checklist. Read more →
Firefox 60 is out and brings ECMAScript Modules, as well as the Web Authentication API.
With Chrome 66 already released and the newest Firefox version coming up next, two major browsers are now distrusting all Symantec certificates issued before June 2016 — and trust me when I say there are a lot of sites that still haven’t changed their affected certificates and, thus, will be out of reach for users now (Chrome) or very soon (Firefox).
The Windows 10 April update brought EdgeHTML 17 with mute tabs, autofill forms, a new “print website” mode to save resources, Service Workers and Push Notifications. Variable Fonts, Screen Capture in RTC via the Media Capture API, Subresource Integrity (SRI), and support for the Upgrade-Insecure-Requests header have also been added. Quite a step forward!
npm version 6 is here with some important security improvements. From now on, you not only have a new npm audit command to audit your dependencies for vulnerabilities, but npm will do this automatically and report back during dependency installs. The new version also comes with npm ci to make CI tasks faster, plus a couple of other improvements.
Node 10 is out with generators and async function support, full support for N-API and support for the Inspector protocol. It will become the next long-term support version in October.
Big news comes from Adobe regarding their Xd prototyping product: From now on, the software will be free for anyone with the new Starter Plan. The only differences to paid plans are limited storage, only one shared prototype (but as many non-shared as you want), and only the free Typekit library. The Xd team also improved the Sketch and Photoshop integrations, and you can now swap symbols, paste to multiple artboards, and protect design specs with a password, too.
The latest Firefox version comes with Web Authentication API support — a big step towards eliminating passwords. The API lets you log in via a hardware key like a YubiKey if the browser and the web service both support the new technology. Notably, the Chrome 67 beta is already shipping the API as well, and the Chrome team has written a technical implementation guide.
Starting from Firefox 60, we will be able to specify the same-site attribute for Cookies. This will allow a web application to advise the browser that cookies should only be sent if the request originates from the website the cookie came from. Read more details in the announcement blog post.
The GDPR Checklist is another helpful resource for people to check whether a website is compliant with the upcoming EU directive.
Postgres 10 has been here for quite a while already, but I personally struggled to find good information on how to use all these amazing features it brings along. Gabriel Enslein now shares Postgres 10 performance updates in a slide deck, shedding light on how to use the built-in JSON support, native partitioning for large datasets, hash index resiliency, and more.
Sam Thorogood shares how we can build a “native undo & redo for the web”, as used in many text editors, games, planning or graphical software and other occasions such as a drag and drop reordering. And while it’s not easy to build, the article explains the concepts and technical aspects to help us understand this complicated matter.
There’s a new way to implement element/container queries into your application: eqio is a tiny library using IntersectionObserver.
Work & Life
Johannes Seitz shares his thoughts about project management at the start of projects. He calls the method “Iteration Zero”. An interesting concept to understand the scope and risks of a project better at a time when you still don’t have enough experience with the project itself but need to build a roadmap to get things started.
Arestia Rosenberg shares why her number one piece of advice for freelancers is to ‘lean into the moment’. It’s about doing work when you can and taking the chance to do something else when you don’t feel you can work productively. In the end, it results in a happier life and more productivity. I’d personally extend this to anyone who can work that way, though, of course, it’s most applicable to freelancers.
Ethan Marcotte elaborates on the ethical issues with Google Duplex, which is designed to imitate a human voice so well that people don’t notice whether it’s a machine or a human being. While this sounds quite interesting from a technical point of view, it will push the debate about fake news much further and make it harder to differentiate between something a human said and something a machine imitated.
I bet that most of you haven’t heard of Palantir yet. Funded by Peter Thiel, it’s a data-mining company that intends to collect as much data as possible about everybody in the world. It’s known to collaborate with various law enforcement authorities and even has connections to military services. What they do with the data, and which data of ours they have, isn’t known. My only hope right now is that this company will suffer a lot from the EU GDPR directive and that the European Union will try to stop its uncontrolled data collection. Facebook’s data practices are nothing compared to Palantir, it seems.
Anton Sten shares his thoughts on vanity metrics, a common way of presenting numbers and statistics out of context. Since he realized how little relevance such figures have, he now thinks differently about most commonly published data, such as investment or usage numbers for services. Reading one number without any context to compare it to tells us very little. We should keep that in mind.
We hope you enjoyed this Web Development Update. The next one is scheduled for Friday, June 15th. Stay tuned.
Managing SVG Interaction With The Pointer Events Property
Try clicking or tapping the SVG image below. If you put your pointer in the right place (the shaded path) then you should have Smashing Magazine’s homepage open in a new browser tab. If you tried to click on some white space, you might be really confused instead.
This is the dilemma I faced during a recent project that included links within SVG images. Sometimes when I clicked the image, the link worked. Other times it didn’t. Confusing, right?
I turned to the SVG specification to learn more about what might be happening and whether SVG offers a fix. The answer: pointer-events.
Not to be confused with DOM (Document Object Model) pointer events, pointer-events is both a CSS property and an SVG element attribute. With it, we can manage which parts of an SVG document or element can receive events from a pointing device such as a mouse, trackpad, or finger.
A note about terminology: “pointer events” is also the name of a device-agnostic, web platform feature for user input. However, in this article — and for the purposes of the pointer-events property — the phrase “pointer events” also includes mouse and touch events.
Outside Of The Box: SVG’s “Shape Model”
CSS imposes a box layout model on HTML. In the box layout model, every element generates a rectangle around its contents. That rectangle may be inline, inline-level, atomic inline-level, or block, but it’s still a rectangle with four right angles and four edges. When we add a link or an event listener to an element, the interactive area matches the dimensions of that rectangle.
Note: Adding a clip-path to an interactive element alters its interactive bounds. In other words, if you add a hexagonal clip-path to an a element, only the points within the clipping path will be clickable. Similarly, adding a skew transformation will turn rectangles into rhomboids.
SVG does not have a box layout model. You see, when an SVG document is contained by an HTML document, within a CSS layout, the root SVG element adheres to the box layout model. Its child elements do not. As a result, most CSS layout-related properties don’t apply to SVG.
So instead, SVG has what I’ll call a ‘shape model’. When we add a link or an event listener to an SVG document or element, the interactive area will not necessarily be a rectangle. SVG elements do have a bounding box. The bounding box is defined as: the tightest fitting rectangle aligned with the axes of that element’s user coordinate system that entirely encloses it and its descendants. But initially, which parts of an SVG document are interactive depends on which parts are visible and/or painted.
Painted vs. Visible Elements
SVG elements can be “filled” but they can also be “stroked”. Fill refers to the interior of a shape. Stroke refers to its outline.
Together, “fill” and “stroke” are painting operations that render elements to the screen or page (also known as the canvas). When we talk about painted elements, we mean that the element has a fill and/or a stroke. Usually, this means the element is also visible.
However, an SVG element can be painted without being visible. This can happen when the visibility attribute or CSS property is set to hidden, or when display is none. The element is there and occupies theoretical space. We just can’t see it (and assistive technology may not detect it).
Perhaps more confusingly, an element can also be visible — that is, have a computed visibility value of visible — without being painted. This happens when elements lack both a stroke and a fill.
Note: Color values with alpha transparency (e.g. rgba(0,0,0,0)) do not affect whether an element is painted or visible. In other words, if an element has an alpha transparent fill or stroke, it’s painted even if it can’t be seen.
Knowing when an element is painted, visible, or neither is crucial to understanding the impact of each pointer-events value.
All Or None Or Something In Between: The Values
pointer-events is both a CSS property and an SVG element attribute. Its initial value is auto, which means that only the painted and visible portions will receive pointer events. Most other values can be split into two groups:
Values that require an element to be visible; and
Values that do not.
painted, fill, stroke, and all fall into the latter category. Their visibility-dependent counterparts — visiblePainted, visibleFill, visibleStroke and visible — fall into the former.
The SVG 2.0 specification also defines a bounding-box value. When the value of pointer-events is bounding-box, the rectangular area around the element can also receive pointer events. As of this writing, only Chrome 65+ supports the bounding-box value.
none is also a valid value. It prevents the element and its children from receiving any pointer events. The pointer-events CSS property can be used with HTML elements too. But when used with HTML, only auto and none are valid values.
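For HTML, a minimal sketch of the none value; the class name is just for illustration:

```html
<style>
  /* Clicks and taps pass straight through this layer to
     whatever element sits underneath it */
  .decorative-overlay {
    pointer-events: none;
  }
</style>
<div class="decorative-overlay">Decorative layer</div>
```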
Since pointer-events values are better demonstrated than explained, let’s look at some demos.
Here we have a circle with a fill and a stroke applied. It’s both painted and visible. The entire circle can receive pointer events, but the area outside of the circle cannot.
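In markup, that demo looks something like this; the sizes and colors are arbitrary, and the click handler is only there to make the interactive area observable:

```html
<svg viewBox="0 0 100 100" width="200" height="200">
  <!-- Painted (fill and stroke) and visible: with the default
       pointer-events value of auto, the whole circle is clickable -->
  <circle cx="50" cy="50" r="40"
          fill="gold" stroke="rebeccapurple" stroke-width="6"
          onclick="alert('circle clicked')" />
</svg>
```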
Disable the fill, so that its value is none. Now if you try to hover, click, or tap the interior of the circle, nothing happens. But if you do the same for the stroke area, pointer events are still dispatched. Changing the fill value to none means that this area is visible, but not painted.
Let’s make a small change to our markup. We’ll add pointer-events="visible" to our circle element, while keeping fill=none.
Now the unpainted area encircled by the stroke can receive pointer events.
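A sketch of that state, again with arbitrary sizes:

```html
<svg viewBox="0 0 100 100" width="200" height="200">
  <!-- fill="none" leaves the interior unpainted, but
       pointer-events="visible" makes every visible part of the
       shape (interior included) respond to the pointer -->
  <circle cx="50" cy="50" r="40"
          fill="none" stroke="rebeccapurple" stroke-width="6"
          pointer-events="visible"
          onclick="alert('circle clicked')" />
</svg>
```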
Augmenting The Clickable Area Of An SVG Image
Let’s return to the image from the beginning of this article. Our “amethyst” is a path element, as opposed to a group of polygons each with a stroke and fill. That means we can’t just add pointer-events="all" and call it a day.
Instead, we need to augment the click area. Let’s use what we know about painted and visible elements. In the example below, I’ve added a rectangle to our image markup.
Even though this rectangle is unseen, it’s still technically visible (i.e. visibility: visible). Its lack of a fill, however, means that it is not painted. Our image looks the same. Indeed it still works the same — clicking white space still doesn’t trigger a navigation operation. We still need to add a pointer-events attribute to our a element. Using the visible or all values will work here.
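Put together, the augmented markup looks roughly like this; the path data is elided, and the fill color is a stand-in:

```html
<svg viewBox="0 0 100 100">
  <a href="https://www.smashingmagazine.com/" pointer-events="visible">
    <!-- Visible but unpainted (no fill, no stroke): this phantom
         rectangle extends the clickable area to the white space -->
    <rect x="0" y="0" width="100" height="100" fill="none" />
    <!-- The "amethyst" shape itself; its path data is omitted here -->
    <path d="…" fill="#9966cc" />
  </a>
</svg>
```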
Using bounding-box would eliminate the need for a phantom element. All points within the bounding box would receive pointer events, including the white space enclosed by the path. But again: pointer-events="bounding-box" isn’t widely supported. Until it is, we can use unpainted elements.
Using pointer-events When Mixing SVG And HTML
Another case where pointer-events may be helpful: using SVG inside of an HTML button.
In most browsers — Firefox and Internet Explorer 11 are exceptions here — the value of event.target will be an SVG element instead of our HTML button. Let’s add pointer-events="none" to our opening SVG tag.
Now when users click or tap our button, the event.target will refer to our button.
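A sketch of the pattern; the button id, the icon path, and the logging are illustrative only:

```html
<button id="menu-button">
  <!-- pointer-events="none" makes the SVG transparent to clicks,
       so the event target is always the button itself -->
  <svg pointer-events="none" viewBox="0 0 24 24"
       width="24" height="24" aria-hidden="true" focusable="false">
    <path d="M3 6h18M3 12h18M3 18h18"
          stroke="currentColor" stroke-width="2" />
  </svg>
  Menu
</button>
<script>
  document.getElementById('menu-button').addEventListener('click', function (event) {
    console.log(event.target.tagName); // "BUTTON", even when the icon was tapped
  });
</script>
```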
SVG supports the same kind of interactivity we’re used to with HTML. We can use it to create charts that respond to clicks or taps. We can create linked areas that don’t adhere to the CSS and HTML box model. And with the addition of pointer-events, we can improve the way our SVG documents behave in response to user interaction.
Browser support for SVG pointer-events is robust. Every browser that supports SVG supports the property for SVG documents and elements. When used with HTML elements, support is slightly less robust. It isn’t available in Internet Explorer 10 or its predecessors, or any version of Opera Mini.
We’ve just scratched the surface of pointer-events in this piece. For a more in-depth, technical treatment, read through the SVG Specification. MDN (Mozilla Developer Network) Web Docs offers more web developer-friendly documentation for pointer-events, complete with examples.
WordPress Local Development For Beginners: From Setup To Deployment
When first starting out with WordPress, it’s very common to make any changes directly on your live site. After all, where else would you do it? You only have that one site, so when something needs changing, you do it there.
However, this practice has several drawbacks. Most of all, it’s very public. So, when something goes seriously wrong, it’s quite noticeable for people on your site.
It’s ok, don’t feel bad. Most WordPress beginners have done this at one point or another. However, in this article, we want to show you a better way: local WordPress development.
What that means is setting up a copy of your website on your local hard drive. Doing so is incredibly useful. So, below we will talk about the benefits of building a local WordPress development environment, how to set one up and how to move your local site to the web when it’s ready.
This is important, so stay tuned!
The Benefits Of Local WordPress Development
Before diving into the how, let’s have a look at the why. Using a local development version of WordPress offers many benefits.
We already mentioned that you no longer have to make changes to your live site with all the risks involved with that. However, there is more:
- Test themes and plugins: With a local copy of your site, you can try out as many theme and plugin combinations as you want without risking taking your live site down due to incompatibilities.
- Update safely: Another time when things are prone to go wrong is updates. With a local environment, you can update WordPress core and components to see if there are any problems before applying the updates to your live site.
- Independence from an online connection: With your WordPress site on your computer, you can work on it without being connected to the Internet. Thus, you can get work done even if there is no Wi-Fi.
- High performance, low cost: Because site performance is not limited by an online connection, local sites usually run much faster. This makes for a better workflow. Also, as you will see, you can set it all up with free software, eliminating the need for a paid staging area.
Sounds good? Then let’s see how to make it happen.
How To Set Up A Local Development Environment For WordPress
In this next part, we will show you how to set up your own local WordPress environment. First, we will go over what you need to do and then how to get it right.
Tools You’ll Need
In order to run, WordPress needs a server. That’s true for an online site as well as a local installation. So, we need to find a way to set one up on our computer.
That server also needs some software WordPress requires to work. Namely, that’s PHP (the platform’s main programming language) and MySQL for the database. Plus, it’s nice to have a MySQL user interface like phpMyAdmin to make handling the database more convenient.
Finally, it’s useful to have some developer tools to analyze and debug your site, for example, to look at HTML and CSS. The easiest way is to use Chrome or Firefox (read our article on Firefox’s DevTools), which have extensive functionality like that built in.
We have several options at our disposal to set up local server environments. Some of the most well known are DesktopServer, Vagrant, and Local by Flywheel. All of these contain the necessary components to set up a local server that WordPress can run on.
For this tutorial we will use XAMPP. The name is an acronym and stands for “cross platform, Apache, MySQL, PHP, Perl”. If you have been paying attention, you will notice that we earlier noted MySQL and PHP as essential to running a WordPress website. In addition, Apache is an open-source solution for creating servers. So, the software contains everything we need in one neat package. Plus, as “cross platform” suggests, XAMPP is available for Windows, Mac, and Linux alike.
Installing XAMPP works pretty much like every other piece of software. On Windows:
- Run the installer (note that you might get a warning about running unknown software; allow it to continue).
- When asked which components to install, make sure that Apache, MySQL, PHP, and phpMyAdmin are active. The rest is usually unnecessary; deactivate it unless you have a good reason to keep it.
- Choose the location to install. Make sure it’s easy to reach, as that’s where your sites will be saved and you will probably access them often.
- You can disregard the information about Bitnami.
- Choose to start the control panel right away at the end.
On a Mac, the process is even simpler:
- Open the .dmg file.
- Double-click the XAMPP icon or drag it to the Applications folder.
That’s it, well done!
After the installation is complete, the control panel starts. Should your operating system ask for Firewall permissions, make sure to allow XAMPP for private networks, otherwise, it won’t work.
From the panel, you can start Apache and MySQL by clicking the Start buttons on the respective rows. If you run into problems with programs that use the same ports as XAMPP, quit those programs and try to restart the XAMPP processes. If the problem is with Skype, you can fix it permanently by disabling the ports under Tools → Options → Advanced → Connections.
Under Config, you can also enable automatic start for the components you need.
After that, it’s time to test your local server. For that, open your browser, and go to http://localhost.
If you see the following screen, everything works as it should. Well done!
Installing WordPress Locally
Now that you have a local server, you can install WordPress in the same way that you do on a web server. The only difference: everything is done on your hard drive, not an FTP server or inside a hosting provider’s admin panel.
Once that is done and you want to install WordPress, you can do so via the htdocs folder inside your installation of XAMPP. There, simply create a new directory, download the latest version of WordPress, unpack the files and copy them into the new folder. After that, you can start the installation by going to http://localhost/newdirectoryname.
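Sketched as commands, those steps look roughly like this. The htdocs path assumes XAMPP’s default location on Linux (on Windows it is typically C:\xampp\htdocs, on macOS /Applications/XAMPP/htdocs), and “mysite” is a hypothetical directory name:

```shell
# Hypothetical site name and XAMPP's default htdocs location on Linux.
HTDOCS="/opt/lampp/htdocs"
SITE="mysite"

# The actual steps (shown as comments so this sketch stays side-effect free):
#   mkdir -p "$HTDOCS/$SITE"
#   curl -LO https://wordpress.org/latest.tar.gz
#   tar -xzf latest.tar.gz --strip-components=1 -C "$HTDOCS/$SITE"

# Afterwards, the WordPress installer lives at:
echo "http://localhost/$SITE"
```

From there, opening that URL in your browser walks you through the familiar WordPress installation.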
When you are satisfied, you can then move the website from a local installation to live environment. That’s what we will talk about next.
How To Deploy Your Site With A Plugin
Alright, once your local site is up to your liking, it’s time to get it online. Just a quick note: if you want to get a copy of your existing live site to your hard drive, you can use the same process as described below only in reverse. The overall principles stay the same.
Either way, we can do this manually or via plugin and there are several solutions available. For this tutorial, we will use Duplicator. I found it to be one of the most convenient free solutions and, as you will see, it makes things really easy.
1. Install Duplicator
Like every other WordPress plugin, you first need to install Duplicator in order to use it. For that, simply go to Plugins → Add New. In the search box, input the plugin name and hit Enter; it should be the first search result.
Click Install Now and activate once it’s done.
2. Export Site
When Duplicator is active on your site, it adds a new menu item in the WordPress dashboard. Clicking it gets you to this screen.
Here, you can create a so-called package. In Duplicator that means a zipped up version of your site, database, and an installation file. To get started, simply click on Create New.
In the next step, you can enter a name for your package. However, it’s not really necessary unless you have a specific reason.
Under Storage, Archive, and Installer are options to determine where to save the archive (works for the Pro version only), exclude files or database tables from the migration and input the updated database credentials and new URL.
In most cases you can just leave everything as is, especially because Duplicator will attempt to fill in the new credentials automatically later. So, for now just hit Next.
After that, Duplicator will run a test to see if there are any problems.
Unless something major pops up, just click on Build to begin building the package. This will start the backup process.
At the end of it, you get the option to download the zip file and installer with the click on the respective buttons or with the help of the One-Click Download.
Do so and you are done with this part of the process.
3. Upload And Deploy Files On Your Server
If all of this has gone down without a hitch, it’s now time to upload the generated files to their new home. For that, log into your FTP server and browse to your new site’s home directory. Then, start uploading the files we generated in the last step.
Once done, you can start the installation process by inputting http://yoursite.com/installer.php into the browser bar.
In the first step, the plugin checks the archive’s integrity and whether it can deploy the site in the current environment. You also get some advanced options that you are welcome to ignore at the moment. However, make sure to check the box where it says “I have read and accept all terms and notices,” before clicking Next.
Your site is now being unpacked. After that, you get to the screen where it’s time to input the database information.
The plugin can either create a new database (if your host allows it) or use an existing one. For the latter option, you need to set up the database manually beforehand. Either way, you need to input the database name, username, and password to continue. Duplicator will also use this information to update the wp-config.php file of your site so that it can talk to the new database. Click Test Database to see if the connection works. Hit Next to start the installation of the database.
Once Duplicator is done with that, the final step is to confirm the details of your old and new site.
That way, Duplicator is able to replace all mentions of your old URL in the database with the new one. If these details are wrong, your site won’t work properly. If everything is fine, click the button that says Next.
4. Finishing Up
Now, there are just a few more things to do before you are finished. The first one is to check the last page of the setup for any problems encountered in the deployment.
The second is to log into your new site (you can do so via the button). Doing so will land you in the Duplicator menu.
Here, be sure to click on Remove Installation Files Now! at the top. This is important for security reasons.
And that’s it, your site should now be successfully migrated. Well done! You have just mastered the basics of local WordPress development.
Quick Note: Updating Your Database Information Manually
Should the Duplicator plugin for some reason be unable to update wp-config.php with the new database information, your site won’t work and you will see a warning that says “Error establishing a database connection”.
In that case, you need to change the information manually. Do this by finding wp-config.php in your WordPress installation’s main folder. You can access it via FTP or a hosting backend like cPanel. Ask your provider for help if you find yourself unable to locate it on your own.
Edit the file (this might mean downloading, editing and re-uploading it) and find the following lines:
/** The name of the database for WordPress */
/** MySQL database username */
/** MySQL database password */
/** MySQL hostname */
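In a complete wp-config.php, each of those comments is followed by a define() call. Filled in, the block looks something like this; the values shown are placeholders for the credentials your new host gives you:

```php
/** The name of the database for WordPress */
define( 'DB_NAME', 'your_database_name' );

/** MySQL database username */
define( 'DB_USER', 'your_database_user' );

/** MySQL database password */
define( 'DB_PASSWORD', 'your_database_password' );

/** MySQL hostname (often simply 'localhost'; check with your provider) */
define( 'DB_HOST', 'localhost' );
```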
Update the information here with that of your new host (by replacing the values between the single quotes), save the file and move it back to your site’s main directory. Now everything should be fine.
WordPress Local Development In A Nutshell
Learning how to install WordPress locally is super useful. It enables you to make site changes, run updates, test themes and plugins and more in a risk-free environment. In addition to that, it’s free thanks to open source software.
Above, you have learned how to build a local WordPress environment with XAMPP. We have led you through the installation process and explained how to use the local server with WordPress. We have also covered how to get your local site online once it’s ready to see the light of day.
Hopefully, your takeaway is that all of this is pretty easy. It might feel overwhelming as a beginner at first, however, using WordPress locally will become second nature quickly. Plus, the benefits clearly outweigh the learning process and will help you take your WordPress skills to the next level.
What are your thoughts on local WordPress development? Any comments, software or tips to share? Please do so in the comments section below!
Have you ever wondered what it would be like to travel the world while working as a designer? It might sound like a dream, but all that glitters is not gold, and it might not be the right choice for you. In this article, I’ll share some insights from my four years of travel and work that hopefully will be useful for anyone willing to try a nomadic lifestyle, too.
When I wrote the first draft of this article, I was in London, after a long trip to Asia. Now, I’m making changes to it in Mexico, before going to Argentina to visit my family. Changing countries often has become an important part of my life as a designer; and, curiously, it all happened by accident.
I once heard that, while at the office, you should stand up from your desk, stretch your legs, leave the building and never come back. That’s exactly what I did when I was living in Barcelona in March 2014. At the time, I was working for a big company on a project that I didn’t enjoy, and was participating in meetings with people I didn’t know. Sounds like fun, doesn’t it?
Looking back, I see that quitting a stable job to jump into the void of a traveling life was one of the scariest moments of my life. I was going to start a trip to South America, doing design workshops and lectures that would provide me with income to sustain my journey — but in reality, I had no idea what I was doing. Without knowing it, I was becoming a nomadic designer.
When that six-month trip ended and I went back to Barcelona, I was never able to settle again. So, what did I do? Of course, I kept traveling! Four years and 60 countries later, I’m still on it.
And I am not alone. I’d like to quote Vitaly Friedman (founder and long-time editor in chief of Smashing Magazine), who once said:
“There is nothing more liberating and sentimental than the sound of a closing door that you will never open again. As memories are flowing by, you turn your back from that door, you hold your breath for just a second and you walk away. At that point you almost sense the change in the wind flowing by. You let go; and knowing and feeling this change is an incredibly empowering, intoxicating experience.”
If this sounds appealing, and you would like to try it for yourself, I hope my article will help you get started. Please note, however, that I’m not going to give you a “how to” guide, with step-by-step instructions that will work for everyone. Instead, I’ll share some insights from my personal experience.
Whatever People Think It Is, It Is Not That
When I tell someone that I’m constantly traveling, usually the person looks amazed. Most of the time, their first question is, “How can you afford to live like that?” I sometimes joke, saying I’m a millionaire with plenty of free time, but it’s not only because of this that I suspect they get the wrong idea about “working and traveling.” Perhaps, some of them imagine plenty of beaches and a lot of comfort and freedom.
Well, guess what? Working from a beach is not that fun or convenient. Or, as Preston Lee puts it, “Freelancing is not sitting on the beach, sipping margaritas, and reading books by Tim Ferriss.”
But the part about having a lot of freedom is true. I do have a lot of freedom and flexibility — like any other freelancer — with the bonus that I get to know different places, people, cultures and cuisines. The perks of such a lifestyle outweigh the difficulties, but at the same time, not everyone would be willing to handle it.
Today, the “digital nomad” hype is all around, with a lot of people publishing e-books, videos and blog posts selling an ideal lifestyle that’s apparently perfect. Of course, some things are often left out — things like getting tired of packing again and again, being away from family and friends, and starting over often. That’s why people might have an unrealistic idea of what all this is about.
In case you are considering trying something similar, it might help to be aware of some personal characteristics that would be helpful to have.
Comfort With Being Alone
Being a solo traveler allows me to make my own decisions and be wherever I want, whenever I want. I spend a lot of time by myself (luckily, I get along well with myself). Even though it’s fine for me, I know it’s not for everybody. Loneliness is easy to solve, though. Coworking spaces, hostels, couchsurfing hangouts and Meetup are great places when you feel like you need some company and want to meet new friends.
Being Able To Adapt To New Contexts Easily
On a rough average, I move from one place to another every four days. This is very tiring, because every time I switch cities, I have to repeat the same process: book transportation and accommodation, pack my stuff, figure out how to get around, check the weather and find out about the cost of living. Ideally, you won’t be moving that often, but you should be able to grab your things quickly and move to a new place that better meets your expectations.
In some cases, you could plan to stay three or four months in the same spot, but the reality is that you will never know how you’ll feel until you get there. So, my advice is: Go little by little, especially at the beginning.
Flexible But Organized
I think I’ve never been as organized as I’ve been since I started traveling. Apart from the tools that I normally use to generate invoices, track payments and manage projects as a freelance designer, I also have several spreadsheets to organize my flights, accommodation and daily expenses.
The spreadsheet for expenses is divided into different categories and organized by day. I put in there all the money that goes out, so that I know exactly what’s happening, and how close I am to reaching my daily budget. I know there are some apps for this, but so far I haven’t felt comfortable with any of them. The key here is not to cheat (just write down everything) and to be consistent (do it every day) — this will give you good insight into how you are spending your money on the road. Then, it will be easier for you to make decisions based on the data gathered — for example, you could stop buying expensive cocktails so much.
Before Packing: Find A Remote-Friendly Job
If you’ve decided to give it a try, then, as a minimum, plan your start, and don’t leave home without at least one or two months of contracted work. This will keep you busy for the first part of your trip, so you’ll only have to worry about settling in a new place.
When I went to South America at the very beginning of my new life, I contacted several design organizations, schools and agencies that could help me organize design workshops and lectures in different countries. I did this because, a few months before leaving, a friend (¡hola José!) and I had self-published a book that gained some popularity among designers in Latin America, and I was willing to teach its contents and show how to put them into practice. Doing things like that provided me with at least some idea of dates and an approximate income, while leaving other things to improvisation.
In my case, I’ve always liked being a freelancer, so I give myself a bit more flexibility in managing my schedule. One of the last projects I worked on was for a team in Barcelona and in which the words “remote” and “freelance” were clear in our agreement from the very beginning. This is what allowed me to keep going from country to country, splitting my time between work and getting to know new places and people.
Just so you know how “glamorous” this way of working is, at the beginning of this project I tried to hide my background during video calls or said that I couldn’t turn video on, because I was a bit embarrassed. By the end, though, my teammates didn’t care about that, and so later on, usually the first question of a video call was to ask me for a tour of the place I was staying at, to everyone’s amusement.
Of course, freelancing is not the only way. Working at a remote-friendly company could also be a good option for you. While you’ll lose some of the freedom to move around, if you plan to stay longer in places, then this could be the way to go: Hubspot, Basecamp, Bohemian Coding (of Sketch fame) and Automattic are some examples of companies that successfully work with designers and developers remotely, and there are many more.
Regardless of whether you are freelancing or working for someone else, I have to say that face-to-face contact with members of your team is necessary from time to time, especially when planning meetings at the beginning of a project, when things are not all that clear. This will partly determine the location you choose, so it’s best to pick a place that allows you to conveniently go where your team is based when needed.
Choosing Your Next Destination
After finding work that allows you to be on the move, the next big thing is to decide where to base yourself. The good thing is that you can work from any place with a decent Internet connection and easy access to coffee (which is essential).
So far, I’ve been in 60 countries, more or less. Many of them were in Europe, where it is easy to move from one place to another and where you can sometimes cross countries without even noticing. (True story: I once missed a whole country because I fell asleep on the train.)
When choosing where to go next, keep in mind the following.
Reliable Access To Internet
You’ll never know how good the Internet connection is until you get there, but you can at least research which countries have the best and worst. This is important. Even if you’re tempted to spend a few days in the middle of a forest, if there is no Internet access, it will probably be a no-go.
A Workable Time Zone
Choose a place where you’ll be in the same time zone as the rest of your team, or maybe a couple of hours ahead or behind. You’ll thank me when you have to schedule your first online meeting and it’s not at 4:00 am.
Good Places To Work From
I’ve said that I can work from almost anywhere, but I have to make sure that my next destination at least has some good coffee shops, libraries or coworking spaces. You could also work from your hostel, hotel or rented flat, but then you’d have to make sure it’s a proper environment to work in.
A Local Design Community
It’s always good to be surrounded by like-minded people, so many times, before going somewhere, I’ll check whether there’s a designer, a design studio or a meetup that I could visit to get in touch with others. Luckily, some people are always willing to meet new pals, and they’ll give you good insight into how the local design community works. You can exchange and share tools of the trade, and you can make new friends as well.
In my case, sending some emails in advance enabled me to meet such talented people as Michael Flarup in Copenhagen and Sacha Greif in Osaka. Just don’t be afraid to ask if you can drop by for a short visit!
Besides that, other factors are at play when you’re deciding where to go: affordability, transportation, safety, weather, etc. Fortunately, Nomad List rounds up the “best” cities, which you can use as a reference when choosing where to head to next.
Of course, sooner or later, you’ll make some mistake. I’ve learned the hard way that planning where to work is indeed important. I once had to take a few unintended days off when I took the Trans-Siberian train to go from St. Petersburg to Beijing via Mongolia. At some point, the temperatures in the middle of Siberia were so low — the worst day was -50º Celsius — that my phones’ batteries suddenly drained — and being lost in the middle of a snowy small town is anything but fun. (The Russians there thought I was crazy for traveling during winter, and I now know they were right. But if you are like me and go there during wintertime anyway, always keep a paper copy of directions and important information, and bring warm boots and a pocket-sized bottle of vodka.)
Packing And Gear
If you already have a job and know where you are heading, then you are ready to pack! Someone once said that before starting a trip, you should take half the things you plan to carry and double the money you have budgeted. While the latter is not always an option, packing light is doable! You’ll thank me when you find out that your accommodation is uphill.
I started traveling with a 50-liter backpack for my personal things, and a smaller one for work-related items that I carry separately. After some time, I switched the smaller one for a daypack that I can roll up and put into the big one when I don’t need it. Everything (including the laptop) weighs 10 kilograms or so, but I keep assessing what else I can take out in order to have fewer things to carry. Similarly to when working on a design, try to get rid of the unnecessary.
For my work, I chose a MacBook Air because I knew I was going to travel a lot, and I wanted something lightweight and compact. Even though it’s a bit old now, it’s been enough for my requirements so far. I also carry two phones (iPhone and Android, with their corresponding chargers) because I do a lot of UX design for mobile and I often need to test on different devices.
I do have more gadgets and stuff, of course. I carry a remote to control my slides (for conferences and workshops), all kinds of adaptors, and noise-cancelling headphones (very useful when you travel by train or plane). But, in general, it is a good idea not to over-buy in advance, and to pick up items only when you need them.
Where To Work
I have worked in all kinds of places: buses, trains, coffee shops, airport lounges, libraries, shared rooms in hostels, private rooms in hotels — sometimes anywhere I can set my laptop down more or less horizontally. I’ve even gotten used to not using a mouse at all, because there is not always a place to put it next to my laptop.
Chances are the room where you stay is also going to be where you work, so choose carefully. Airbnb is now available pretty much everywhere, so renting your own room with a desk should be enough — just be sure to select “laptop friendly” in the search filters.
I try to keep a low budget, so most of the time I stay in hostels. With a bit of practice, I’ve learned to identify places that are better for work — such as ones with spacious common areas — just by looking at the pictures. Of course, “party hostels” are not an option if you want to get work done, so if the property’s website shows a lot of people having fun, that’s a red flag.
One of the main problems is that you never know how good the Internet connection will be. So, if you have difficulty with the Wi-Fi at your hostel, go to a coffee shop, coworking space or even a public library. (Interestingly, public libraries are where I’m most productive, perhaps because of the silence and the fact that I cannot make calls or participate in online meetings.)
Tip: More than once, I’ve visited a coworking space and offered to give a free design lecture to members in exchange for a few days’ use of a desk. It’s also nice because afterwards people will know who you are and will approach you openly to share ideas! Feel free to try it yourself.
Finally, a couple of websites could be useful when you’re looking for a place to work — namely, Workfrom and Places to Work.
Things To Keep In Mind When Living On The Road
Some things are important for any traveling designer but too complex to address fully in one article, so I’ll just mention them briefly, in the hope that this will still be useful.
Please keep in mind that everybody’s situation is different. There’s no one-size-fits-all answer, so you might have to adapt the following to your case.
What is convenient for you will depend on your situation, and it’s difficult to advise on what’s best, so be aware of the legislation of both your home country and the country where you plan to work. I’ve also been exploring the possibilities that Estonia offers with its e-Residency program. I’m already an Estonian e-resident, but I haven’t done anything practical with it as of yet.
Managing Your Finances
Besides keeping spreadsheets for expenses, one thing I discovered (perhaps too late) is that it is usually far more convenient to open an account with an app such as Monzo or Revolut (there are plenty of others, like N26), so that you pay fewer fees and get better exchange rates when making withdrawals. This is especially useful in places where credit or debit cards are not commonly accepted; this way, you can also avoid the excessive fees that traditional banks often charge.
Also, check the exchange rate of your home currency before arriving at your next destination. In some cases, I’ve arrived in a new country and bought a very expensive coffee because I didn’t do my homework beforehand. One app I use to avoid situations like this is Currency, an elegant and simple currency converter.
Something useful I learned when applying for visas that require a tentative itinerary: book hostels and flights that you can later cancel for free, and print out and bring those booking confirmations.
Health And Insurance
Plenty of companies offer travel health insurance, and I cannot recommend any particular one. The only thing I can advise is not to leave home without it. An accident in a foreign country, besides being annoying, could also be very expensive. So, prepare in advance.
Staying Connected
What I normally do is buy a prepaid SIM card in each country I visit, with a data plan that will be enough for my stay. I do a bit of research beforehand to see which carrier has the widest coverage, and I check whether there’s any information in English on its website for topping up credit when necessary.
A SIM card is more necessary in some countries than in others. For example, in Russia, most open Wi-Fi networks require you to enter a phone number, to which a validation code is sent, before you can use the network. Japan, on the other hand, is full of convenience stores offering free Wi-Fi (such as 7-Eleven), so a SIM card is not as necessary. Meanwhile, Europe now has a more convenient approach to roaming, called “Roam Like at Home”, so getting a new SIM card in every country is no longer necessary.
In any case, before buying a SIM card, make sure it will work across the country, especially if the country is large with many different regions.
Another important thing to check beforehand: Will your current phone number work in the new country? Luckily, there’s a website where you can check that information.
Preparing To Be Offline
Moving from one place to another involves a lot of time spent on the road, and in some cases you won’t have the chance to find Wi-Fi or good data network coverage. At these times, Spotify and Netflix are my best friends, because both allow me to download music and TV shows, respectively, so I can use them without an Internet connection. This was especially useful when I had to take several connecting trains to cross Siberia, spending in some cases around 30 hours straight in a railroad car before my next stop.
For the same reason, I always download a map of my next destination, using Google Maps (Maps.me has the same functionality).
If you want more information of this kind, check out another article I wrote, with more travel hacks and tips from my years of traveling.
Before I board my next train, let me tell you that becoming a nomadic freelance designer is one of the best decisions of my life so far. I have the flexibility that any freelancer has, but I also meet new people and see new places all the time. And it’s not as expensive as you might think: if you control your budget, traveling can be cheaper than living in a big city. So, I can afford to work four to five hours every day, and I use the rest of the time to get to know the place I’m visiting, to work on personal projects and to write articles for Smashing Magazine.
Traveling also makes it a bit hard to separate pleasure from work, because everything seems more enjoyable when you’re moving around. I’ve had to remind myself sometimes that I’m not on a vacation and to focus on getting things done. After a while, though, I’ve found a good balance. Now, I allow myself some moments to just move around, travel and enjoy the ride, doing nothing. It’s not all about work, after all!
Living like this is sometimes challenging and tiring, but I find it much richer than being in an office Monday to Friday, 9:00 to 5:00. And with all of the current technology, it’s easy for any designer to embrace this lifestyle. The most difficult part is finding a job to sustain your travel, but once you’ve found it, the next part is just to let it flow.
The nomadic lifestyle is not right for everyone, but the only way to know for sure is to try. Neale Donald Walsch once said that life starts where your comfort zone ends, and I completely agree. If you can afford to take the risk, go for it and enjoy a part of life that might not last long but will give you a life-changing experience, teach you new things and change the way you see the world forever.
If, for some reason, it doesn’t work, you can always go back to your previous life, right? Whether you are ready to give it a try or still have some questions about it, feel free to let me know. I’ll do my best to help you out. See you on the road!