Indian Independence Day is right around the corner. For consumers in India, it's a day of rejoicing and celebration. And for marketers, it opens a box of opportunities.
For marketers, the chance to tap into the spirit of Independence Day translates directly into influence over consumers' buying decisions.
In India, especially during major festivals and occasions like Independence Day, you can expect cutthroat rivalry among major brands. And yet, there are big winners in such intense situations.
How does this happen?
What are the strategies and tactics that these brands deploy to successfully pull off a nationwide campaign?
We studied various campaigns of India’s largest online brands to find out the answer.
We found that five different ploys were deployed to pique the interest of the average online consumer in India, and these drove the campaigns' success.
1. Tapping into consumers’ emotions
Independence Day is the time of the year when citizens are filled with joy and hopes for prosperity for the whole nation. Marketers very well understand these emotions and know how to leverage these to their advantage.
A fitting example is the outstation campaign by Ola, one of the largest cab aggregators in India.
When Independence Day falls close to a weekend, people love to travel. Weekend getaways are popular, and folks love to spend time with friends and relatives at nearby places.
Ola appealed to its customers’ emotions by offering them outstation deals during the Independence week. The company even offered an INR 300 discount for its first-time outstation users. Ola also partnered with Club Mahindra and Yatra to offer deals on hotel stays.
By positioning itself as a viable brand for trips to nearby getaways, Ola encourages customers to take a holiday.
2. Limited Period Offer
The catch with these festive sales and offers is that they must end after a short span; such campaigns generally run for 2 to 5 days around the festival.
For example, the Flipkart Freedom Sale, which celebrates India's spirit of independence, ran for only 4 days, so people had limited time to buy what they wanted.
Most consumers plan their purchases around such special occasions to get the best deal on the products they intend to buy. Either way, these marketing events, sales, and giveaways always come with an expiration date.
Setting up such a trigger pushes prospective buyers to make purchases fast, to avoid missing out on the deals.
3. Creating a Sense of Urgency with the help of Micro Events
Some brands build on the limited nature of the sale and come out all guns blazing to create a sense of urgency.
On top of the limited duration of the sale itself, a few micro-events are incorporated into it, each running from a few minutes to a few hours. These micro-sales come with an additional discount and are exclusive to people who can decide and act fast.
Amazon does this very well with their lightning deals, which generally last from 2-6 hours throughout the event (which itself is 4-day long). The lightning deals have an additional discount on an already stated discount. The catch is the limited time and the sense of urgency it creates.
If people want to buy a product that has a lightning deal, they must add it to their cart and check out within 15 minutes, or the deal is gone forever.
4. Exclusive Product Launch
These festive events also leverage their audience’s interest by providing exclusive product offers during a sale.
It is highly useful for building anticipation among shoppers. In India, Amazon used this to attract consumers in the smartphone market: India is known as a mobile-first country, where over half the population owns a smartphone.
Amazon saw a huge boost in smartphone sales through exclusive launches of devices such as the BlackBerry KEYone, the LG Q6, and the Soft Gold variant of the OnePlus 5. The result was a massive 10X increase in sales for Amazon during their Big Indian Sale event alone.
5. Omnichannel Promotion and User Experience
Most major brands understand their users and customers. India is predominantly a mobile-first market with decent computer penetration. People love to shop on their mobile devices as well as on their laptops or PCs.
And most users want omnichannel access to the brand of their choice. We saw that a major chunk of brands embraced this philosophy over the Independence week.
For instance, my primary communication happens on my cell phone. Brands saw that I interacted far more on my phone than over email or their websites, so most of the promotions I received arrived via mobile push or in-app notifications rather than through email or the website.
There were also deals that promoted the use of specific channels to buy products: Grofers offered an INR 100 discount to shoppers who purchased through its mobile app.
Appeal to Your Customers’ Emotions; Don’t Stop Experimenting
Customers are spoilt for choice when the whole nation is celebrating. In these times, marketers need not be intimidated or overwhelmed: they have to leverage these emotions and keep building better experiences with the help of experimentation.
These are the major strategies that brands have demonstrated to be effective. You need to understand the emotional cues of your customers and create your campaign accordingly.
By tapping into your customer’s cognitive tendencies, you can build healthy, long-term relationships with your customers.
HTTPS is a must for every website nowadays: Users are looking for the padlock when providing their details; Chrome and Firefox explicitly mark websites that provide forms on pages without HTTPS as being non-secure; it is an SEO ranking factor; and it has a serious impact on privacy in general.
Additionally, there is now more than one option to get an HTTPS certificate for free, so switching to HTTPS is only a matter of will.
A few weeks ago, a Fortune 500 company asked that I review their A/B testing strategy.
The results were good, the hypotheses strong, everything seemed to be in order… until I looked at the log of changes in their testing tool.
I noticed several blunders: in some experiments, they had adjusted the traffic allocation for the variations mid-experiment; some variations had been paused for a few days, then resumed; and experiments were stopped as soon as statistical significance was reached.
When it comes to testing, too many companies worry about the “what”, or the design of their variations, and not enough worry about the “how”, the execution of their experiments.
Don’t get me wrong, variation design is important: you need solid hypotheses supported by strong evidence. However, if you believe your work is finished once you have come up with variations for an experiment and pressed the launch button, you’re wrong.
In fact, the way you run your A/B tests is the most difficult and most important piece of the optimization puzzle.
There are three kinds of lies: lies, damned lies, and statistics.
– Mark Twain
In this post, I will share the biggest mistakes you can make within each step of the testing process: the design, launch, and analysis of an experiment, and how to avoid them.
This post is fairly technical. Here’s how you should read it:
If you are just getting started with conversion optimization (CRO), or are not directly involved in designing or analyzing tests, feel free to skip the more technical sections and simply skim for insights.
If you are an expert in CRO or are involved in designing and analyzing tests, you will want to pay attention to the technical details. These sections are highlighted in blue.
Mistake #1: Your test has too many variations
The more variations, the more insights you’ll get, right?
Not exactly. Having too many variations slows down your tests but, more importantly, it can impact the integrity of your data in 2 ways.
First, the more variations you test against each other, the more traffic you will need, and the longer you’ll have to run your test to get results that you can trust. This is simple math.
But the issue with running a longer test is that you are more likely to be exposed to cookie deletion. If you run an A/B test for more than 3–4 weeks, the risk of sample pollution increases: in that time, people will have deleted their cookies and may enter a different variation than the one they were originally in.
Within 2 weeks, you can get a 10% dropout of people deleting cookies and that can really affect your sample quality.
The second risk when testing multiple variations is that the significance level goes down as the number of variations increases.
For example, if you use the accepted significance level of 0.05 and decide to test 20 different scenarios, on average one of them will appear significant purely by chance (20 × 0.05 = 1). Test 100 different scenarios, and that number rises to five (100 × 0.05 = 5).
In other words, the more variations, the higher the chance of a false positive, i.e. the higher your chances of crowning a "winner" whose lift is due to chance alone.
Google's 41 shades of blue is a good example of this. In 2009, when Google could not decide which shade of blue would generate the most clicks on their search results page, they decided to test 41 shades. With each comparison made at a 95% confidence level, the chance of getting at least one false positive was 88%. Testing 10 shades would still have given a 40% chance of a false positive; with 3 shades it drops to about 14%, and only a single comparison keeps it at the nominal 5%.
You can calculate the chance of getting a false positive using the following formula: 1-(1-a)^m with m being the total number of variations tested and a being the significance level. With a significance level of 0.05, the equation would look like this:
1-(1-0.05)^m or 1-0.95^m.
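To see how quickly the false positive probability climbs, here is a short Python sketch of the formula above (the function name is mine, for illustration):

```python
# Probability of getting at least one false positive across m comparisons,
# each run at significance level a -- the formula 1 - (1 - a)^m.
def false_positive_prob(m, a=0.05):
    return 1 - (1 - a) ** m

for m in (1, 5, 10, 20, 41):
    print(f"{m:>2} comparisons -> {false_positive_prob(m):.0%} chance of a false positive")
```

At 41 comparisons the probability is already close to 9 in 10, which is why an uncorrected 41-way test is almost guaranteed to hand you a spurious winner.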
You can fix the multiple comparison problem using the Bonferroni correction, which calculates the confidence level for an individual test when more than one variation or hypothesis is being tested.
Wikipedia illustrates the Bonferroni correction with the following example: “If an experimenter is testing m hypotheses, [and] the desired significance level for the whole family of tests is a, then the Bonferroni correction would test each individual hypothesis at a significance level of a/m.
For example, if [you are] testing m = 8 hypotheses with a desired a = 0.05, then the Bonferroni correction would test each individual hypothesis at a = 0.05/8=0.00625.”
In other words, you’ll need a 0.625% significance level, which is the same as a 99.375% confidence level (100% – 0.625%) for an individual test.
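The Bonferroni arithmetic above is a one-liner; here is an illustrative Python sketch of it:

```python
def bonferroni_alpha(family_alpha, m):
    """Per-comparison significance level under the Bonferroni correction."""
    return family_alpha / m

# Wikipedia's example: m = 8 hypotheses at a desired family-wise a = 0.05.
per_test = bonferroni_alpha(0.05, 8)
print(per_test)  # 0.00625, i.e. a 99.375% confidence level per individual test
```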
The Bonferroni correction tends to be a bit too conservative and is based on the assumption that all tests are independent of each other. However, it demonstrates how multiple comparisons can skew your data if you don’t adjust the significance level accordingly.
The following tables summarize the multiple comparison problem.
Probability of a false positive with a 0.05 significance level:
Adjusted significance and confidence levels to maintain a 5% false discovery probability:
In this section, I’m talking about the risks of testing a high number of variations in an experiment. But the same problem also applies when you test multiple goals and segments, which we’ll review a bit later.
Each additional variation and goal adds a new combination of comparisons to an experiment. In a scenario with four variations and four goals, that's 16 potential outcomes that need to be controlled for separately.
Some A/B testing tools, such as VWO and Optimizely, adjust for the multiple comparison problem. These tools will make sure that the false positive rate of your experiment matches the false positive rate you think you are getting.
In other words, the false positive rate you set in your significance threshold will reflect the true chance of getting a false positive: you won’t need to correct and adjust the confidence level using the Bonferroni or any other methods.
One final problem with testing multiple variations can occur when you are analyzing the results of your test. You may be tempted to declare the variation with the highest lift the winner, even though there is no statistically significant difference between the winner and the runner up. This means that, even though one variation may be performing better in the current test, the runner up could “win” in the next round.
You should consider both variations as winners.
Mistake #2: You change experiment settings in the middle of a test
When you launch an experiment, you need to commit to it fully. Do not change the experiment settings, the test goals, the design of the variation or of the Control mid-experiment. And don’t change traffic allocations to variations.
Changing the traffic split between variations during an experiment will impact the integrity of your results because of a problem known as Simpson's Paradox. This statistical paradox appears when a trend that shows up in several groups of data disappears, or reverses, when those groups are combined.
Ronny Kohavi from Microsoft shares an example wherein a website gets one million daily visitors, on both Friday and Saturday. On Friday, 1% of the traffic is assigned to the treatment (i.e. the variation), and on Saturday that percentage is raised to 50%.
Even though the treatment has a higher conversion rate than the Control on both Friday (2.30% vs. 2.02%) and Saturday (1.2% vs. 1.00%), when the data is combined over the two days, the treatment seems to underperform (1.20% vs. 1.68%).
This is because we are dealing with weighted averages. The data from Saturday, a day with an overall worse conversion rate, impacted the treatment more than that from Friday.
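You can reproduce the arithmetic of Kohavi's example in a few lines of Python (an illustrative sketch; the visitor counts and rates are the figures quoted above):

```python
# Reproducing the weighted-average arithmetic behind Simpson's Paradox,
# using the (visitors, conversion rate) figures from the example above.
treatment = {"friday": (10_000, 0.0230), "saturday": (500_000, 0.0120)}
control   = {"friday": (990_000, 0.0202), "saturday": (500_000, 0.0100)}

def combined_rate(groups):
    conversions = sum(visitors * rate for visitors, rate in groups.values())
    total_visitors = sum(visitors for visitors, _ in groups.values())
    return conversions / total_visitors

# The treatment wins on each day taken separately...
assert all(treatment[day][1] > control[day][1] for day in treatment)
# ...yet loses once the days are pooled into a weighted average:
print(f"treatment: {combined_rate(treatment):.2%}")  # ~1.22%
print(f"control:   {combined_rate(control):.2%}")    # ~1.68%
```

The Saturday data, a day with a worse overall conversion rate, carries far more weight in the treatment's average (500,000 of its 510,000 visitors) than in the Control's, which is exactly what flips the combined result.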
We will return to Simpson’s Paradox in just a bit.
Changing the traffic allocation mid-test will also skew your results because it alters the sampling of your returning visitors.
Changes made to the traffic allocation only affect new users. Once visitors are bucketed into a variation, they will continue to see that variation for as long as the experiment is running.
So, let’s say you start a test by allocating 80% of your traffic to the Control and 20% to the variation. Then, after a few days you change it to a 50/50 split. All new users will be allocated accordingly from then on.
However, all the users that entered the experiment prior to the change will be bucketed into the same variation they entered previously. In our current example, this means that the returning visitors will still be assigned to the Control and you will now have a large proportion of returning visitors (who are more likely to convert) in the Control.
Note: This problem of changing traffic allocation mid-test only happens if you make a change at the variation level. You can change the traffic allocation at the experiment level mid-experiment. This is useful if you want to have a ramp up period where you target only 50% of your traffic for the first few days of a test before increasing it to 100%. This won’t impact the integrity of your results.
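To see why returning visitors stay put, it helps to picture how sticky bucketing works. The following Python sketch is a hypothetical illustration of the idea; the hashing scheme and names are mine, not any particular tool's implementation:

```python
import hashlib

# Assignments already made are remembered (in practice, via a cookie or a
# persistent visitor ID), so changing the traffic split only affects
# visitors who enter the experiment for the first time afterwards.
assignments = {}

def assign_variation(visitor_id, weights):
    """weights: list of (variation_name, traffic_fraction) summing to 1.0."""
    if visitor_id in assignments:  # returning visitor: sticky bucket
        return assignments[visitor_id]
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    position = int(digest, 16) / 16 ** len(digest)  # uniform in [0, 1)
    cumulative = 0.0
    choice = weights[-1][0]
    for name, fraction in weights:
        cumulative += fraction
        if position < cumulative:
            choice = name
            break
    assignments[visitor_id] = choice
    return choice

first = assign_variation("user-42", [("control", 0.8), ("variation", 0.2)])
# Even after switching to a 50/50 split, user-42 stays in the same bucket:
second = assign_variation("user-42", [("control", 0.5), ("variation", 0.5)])
print(first == second)  # True
```

Under a scheme like this, the 80/20 cohort remains frozen in its original buckets while new traffic fills the 50/50 split, which is precisely how the Control ends up with a disproportionate share of returning visitors.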
As I mentioned earlier, the “do not change mid-test rule” extends to your test goals and the designs of your variations. If you’re tracking multiple goals during an experiment, you may be tempted to change what the main goal should be mid-experiment. Don’t do it.
We optimizers all have a favorite variation that we secretly hope will win during any given test. This is not a problem until you start giving weight to the metrics that favor this variation. Decide on a goal metric that you can measure in the short term (the duration of a test) and that can predict your success in the long term. Track it and stick to it.
It is useful to track other key metrics to gain insights and/or debug an experiment, if something looks wrong. However, these are not the metrics you should look at to make a decision, even though they may favor your favorite variation.
Let’s say you have avoided the 2 mistakes I’ve already discussed, and you’re pretty confident about the results you see in your A/B testing tool. It’s time to analyze the results, right?
Not so fast! Did you stop the test as soon as it reached statistical significance?
I hope not…
Statistical significance should not dictate when you stop a test. It only tells you if there is a difference between your Control and your variations. This is why you should not wait for a test to be significant (because it may never happen) or stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to understand better when to stop a test.
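As a rough illustration of what such a calculator does under the hood, here is a hedged Python sketch using the standard normal approximation for comparing two proportions. The function name and defaults are my own, and real calculators may use somewhat different formulas:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_lift,
                              alpha=0.05, power=0.80):
    """Approximate visitors needed per variation for a two-sided test
    comparing two conversion rates (normal approximation)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    pooled_variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_variance / (p2 - p1) ** 2)

# A 3% baseline conversion rate and a 10% relative lift to detect:
print(sample_size_per_variation(0.03, 0.10))  # roughly 53,000 per variation
```

Note how quickly the requirement drops as the detectable effect grows: a 20% lift at the same baseline needs only about a quarter as many visitors, because the required sample scales with the inverse square of the difference in rates.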
Now, assuming you’ve stopped your test at the correct time, we can move on to segmentation. Segmentation and personalization are hot topics in marketing right now, and more and more tools enable segmentation and personalization.
There are 2 main problems with post-test segmentation, however, that will impact the statistical validity of your segments (when done incorrectly).
1. The sample size of your segments is too small. You stopped the test when you reached the calculated sample size, but at the segment level the sample size is likely far smaller, and the lift between segments has no statistical validity.
2. The multiple comparison problem. The more segments you compare, the greater the likelihood of a false positive among those comparisons. At a 95% confidence level, you can expect roughly one false positive for every 20 post-test segments you examine.
There are different ways to prevent these two issues, but the easiest and most accurate strategy is to create targeted tests (rather than breaking down results per segment post-test).
I don’t advocate against post-test segmentation―quite the opposite. In fact, looking at too much aggregate data can be misleading. (Simpson’s Paradox strikes back.)
The Wikipedia definition for Simpson’s Paradox provides a real-life example from a medical study comparing the success rates of two treatments for kidney stones.
The table below shows the success rates and treatment counts for the two treatments, on both small and large kidney stones.
The paradoxical conclusion is that treatment A is more effective when used on small stones, and also when used on large stones, yet treatment B is more effective when considering both sizes at the same time.
In the context of an A/B test, this would look something like this:
Simpson's Paradox surfaces when sampling is not uniform, that is, when the sample sizes of your segments differ. There are a few things you can do to avoid getting lost in, or misled by, this paradox.
First, you can prevent this problem from happening altogether by using stratified sampling, which is the process of dividing members of the population into homogeneous and mutually exclusive subgroups before sampling. However, most tools don’t offer this option.
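As a hypothetical sketch of the idea, here is what stratified sampling might look like in Python. The function, names, and data are invented purely for illustration:

```python
import random

def stratified_sample(population, key, fraction, seed=0):
    """Sample the same fraction from each subgroup ("stratum"), so that
    every segment is proportionally represented in the sample."""
    rng = random.Random(seed)
    strata = {}
    for item in population:
        strata.setdefault(key(item), []).append(item)
    sample = []
    for members in strata.values():
        k = round(len(members) * fraction)
        sample.extend(rng.sample(members, k))
    return sample

# 1,000 visitors: every 4th comes from ads, the rest from organic search.
visitors = [{"id": i, "source": "ads" if i % 4 == 0 else "organic"}
            for i in range(1000)]
sample = stratified_sample(visitors, key=lambda v: v["source"], fraction=0.1)
print(len(sample))  # 100: exactly 25 from "ads" and 75 from "organic"
```

Because each stratum contributes exactly its proportional share, no segment can dominate the weighted average the way Saturday's traffic did in the earlier example.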
If you are already in a situation where you have to decide whether to act on aggregate data or on segment data, Georgi Georgiev recommends you look at the story behind the numbers, rather than at the numbers themselves.
“My recommendation in the specific example [illustrated in the table above] is to refrain from making a decision with the data in the table. Instead, we should consider looking at each traffic source/landing page couple from a qualitative standpoint first. Based on the nature of each traffic source (one-time, seasonal, stable) we might reach a different final decision. For example, we may consider retaining both landing pages, but for different sources.
In order to do that in a data-driven manner, we should treat each source/page couple as a separate test variation and perform some additional testing until we reach the desired statistically significant result for each pair (currently we do not have significant results pair-wise).”
In a nutshell, it can be complicated to get post-test segmentation right, but when you do, it will unveil insights that your aggregate data can’t. Remember, you will have to validate the data for each segment in a separate follow up test.
The execution of an experiment is the most important part of a successful optimization strategy. If your tests are not executed properly, your results will be invalid and you will be relying on misleading data.
It is always tempting to showcase good results. Results are often the most important factor when your boss is evaluating the success of your conversion optimization department or agency.
But results aren't always trustworthy. Too often, the numbers you see in case studies lack valid statistical inference: either they rely too heavily on an A/B testing tool's unreliable stats engine, or they haven't addressed the common pitfalls outlined in this post, or both.
Use case studies as a source of inspiration, but make sure that you are executing your tests properly by doing the following:
If your A/B testing tool doesn’t adjust for the multiple comparison problem, make sure to correct your significance level for tests with more than 1 variation
Don’t change your experiment settings mid-experiment
Don’t use statistical significance as an indicator of when to stop a test, and make sure to calculate the sample size you need to reach before calling a test complete
Finally, keep segmenting your data post-test. But make sure you are not falling into the multiple comparison trap and are comparing segments that are significant and have a big enough sample size
Support for responsive images was added to WordPress core in version 4.4 to address the use case for viewport-based image selection, where the browser requests the image size that best fits the layout for its particular viewport.
Images that are inserted within the text of a post automatically get the responsive treatment, while images that are handled by the theme or plugins — like featured images and image galleries — can be coded by developers using the new responsive image functions and filters. With a few additions, WordPress websites can accommodate another responsive image use case known as art direction. Art direction gives us the ability to design with images whose crop or composition changes at certain breakpoints.
You believe that a more customized user experience will lead to more orders, demo requests, phone calls etc. So, you have structures in place to deliver appropriate messages to your different audiences, each with distinct needs and expectations.
But I must ask, how are you segmenting your visitors?
You might be grouping them by device, by traffic source, by demographic data.
And these buckets are all viable:
Your desktop visitors may behave differently than your mobile visitors
Visitors coming from a Facebook ad may respond better to social proof triggers than those coming from organic search
Older visitors may browse your products differently than younger visitors
But the ultimate goal of segmentation, like conversion optimization, is to increase conversions. With that in mind, this post is all about that one segment you probably aren’t looking at: converters versus non-converters.
To clarify, your converter segment is not necessarily the same thing as your repeat-customer or Loyalty segment. Your converter segment includes anyone who converts, whether or not they’ve converted before.
Rather than focusing on different general visitor segments, you should turn your attention to the behaviors that differentiate visitors who convert from visitors who don’t.
When you focus on general visitor segments, you’re working from the top of the funnel to the bottom. Why not work from the bottom of the funnel, up? After all, that’s where the money is!
Correlation vs. Causation
First things first: when you’re looking at differences between converters and non-converters on your site, you must be wary of correlation versus causation.
It’s almost impossible to know whether converters are behaving in a distinct way because they’re already motivated to buy (correlation) or because the elements on the page have enabled those distinct behaviors (causation).
For example, does a converter browse more products than a non-converter because they’re already motivated to buy before arriving on-site? Or does an on-site UI that emphasizes browsability encourage converters to browse (and therefore convert)?
It’s similar to the search bar quandary: typically, visitors who search convert at a higher rate. But do they convert because they search (causation) or do the search because they’re already more motivated to buy (correlation)?
It’s a bit of a “the chicken or the egg” situation.
Fortunately, at WiderFunnel, we’re able to test on many retailers’ websites and take note of certain patterns. On multiple instances with different clients, we have observed clear and drastic differences in key user behavior metrics between visitors who convert and visitors who don’t convert.
These differences paint a picture of how your visitors shop. You can use this information to improve your UX and add features that’ll help your general visitors behave more like converters than non-converters. The hope is that encouraging non-converters to mimic the behavior of converters will lead to them actually becoming converters.
Moral of the story: If you observe impactful differences between converters and non-converters on your site, you should create a hypothesis that targets these differences.
WiderFunnel Optimization Strategist, Nick So, recently ran a test that did just that.
Let’s buy some shoes
One of our biggest clients is a global shoe retailer. Over the past 6 months, Nick noticed some patterns in their analytics:
A high percentage of visitors who converted (around 60%) were returning visitors
Converters visited 186% more pages per session on average and spent more time on page per session than non-converters
Meaning, the majority of converters on this site have already been to the site at least once before and they seem to spend much more time browsing than their non-converting counterparts.
It’s common sense that visitors who convert behave differently than those who don’t. But it wasn’t until we pulled the report and saw how big the difference was in their shopping behavior that we really thought to go down this path.
In previous testing, Nick had also observed that visitors to this site are responsive to features that increase the browsability of multiple products. He’d noticed the same sensitivity with some of our other retailer clients, where features that made it easier to compare products helped conversions.
We decided to run with this data. Our hypothesis was based on the idea that visitors who convert are most likely returning visitors, therefore, pointing them toward products they’ve already viewed will guide them back into the funnel.
The hypothesis: Increasing the browsability of the site by displaying recently viewed products to increase relevance for the visitor will encourage higher engagement and increased return visits, which will increase conversions.
Nick and the team tested a single variation against the Control homepage. The Control featured a “Recommended Products” section just below the hero section, displaying four of the client’s most popular product categories.
In our variation, we replaced this with a “Your Recently Viewed Products” section. We wanted to target those visitors who were returning to the site, presumably to continue in the purchasing process. The products displayed in this section were unique to each returning visitor.
Our variation won, consistently outperforming the Control during this test. This client saw a 6.9% increase in order completions.
Bottom to top
When you’re segmenting your audience, don’t forget about the segment that floats at the bottom of the funnel. Instead of identifying the differences that characterize visitors coming to your site, why not work backwards?
Look at the behavioral differences that distinguish converters from non-converters and test ways to help non-converters mimic the behaviors of converters.
Have you noticed drastic behavioral differences between your visitors who convert and those who don’t convert? Do you tap into this particular segment when you plan tests? Tell us all about it in the comments!
CSS can be used to style and animate scalable vector graphics, much like it is used to style and animate HTML elements. In this article, which is a modified transcript of a talk I recently gave at CSSconf EU and From the Front, I'll go over the prerequisites and techniques for working with CSS in SVG.
I’ll also go over how to export and optimize SVGs, techniques for embedding them and how each one affects the styles and animations applied, and then we’ll actually style and animate with CSS.
Scalable vector graphics (SVG) is an XML-based vector image format for two-dimensional graphics, with support for interactivity and animation. In other words, SVGs are XML tags that render shapes and graphics, and these shapes and graphics can be interacted with and animated much like HTML elements can be.
There are many reasons why SVGs are great and why you should be using them today:
SVG graphics are scalable and resolution-independent. They look great everywhere, from high-resolution “Retina” screens to printed media.
SVGs have very good browser support. Fallbacks for non-supporting browsers are easy to implement, too, as we'll see later in the article.
Because SVGs are basically text, they can be gzipped, making the files smaller than their bitmap counterparts (JPEG and PNG).
SVG comes with built-in graphics effects such as clipping and masking operations, background blend modes, and filters. This is basically the equivalent of having Photoshop photo-editing capabilities right in the browser.
SVGs are accessible. In one sense, they have a very accessible DOM API, which makes them a perfect tool for infographics and data visualizations and which gives them an advantage over HTML5 Canvas because the content of the latter is not accessible. In another sense, you can inspect each and every element in an SVG using your favorite browser’s developer tools, just like you can inspect HTML elements. And SVGs are accessible to screen readers if you make them so. We’ll go over accessibility a little more in the last section of this article.
Several tools are available for creating, editing and optimizing SVGs. And other tools make it easier to work with SVGs and save a lot of time in our workflows. We’ll go over some of these tools next.
Exporting SVGs From Graphics Editors And Optimizing Them
The three most popular vector graphics editors are:
Choose any editor to create your SVGs. After choosing your favorite editor and creating an SVG but before embedding it on a web page, you need to export it from the editor and clean it up to make it ready to work with.
I’ll refer to exporting and optimizing an SVG created in Illustrator. But the workflow applies to pretty much any editor, except for the Illustrator-specific options we’ll go over next.
To export an SVG from Illustrator, start by going to “File” → “Save as,” and then choose “.svg” from the file extensions dropdown menu. Once you’ve chosen the .svg extension, a panel will appear containing a set of options for exporting the SVG, such as which version of SVG to use, whether to embed images in the graphic or save them externally and link to them in the SVG, and how to add the styles to the SVG (by using presentation attributes or by using CSS properties in a <style> element).
The following image shows the best settings to choose when exporting an SVG for the web:
Whichever graphics editor you choose, it will not output perfectly clean and optimized code. SVG files, especially ones exported from editors, usually contain a lot of redundant information, such as meta data from the editor, comments, empty groups, default values, non-optimal values and other stuff that can be safely removed or converted without affecting the rendering of the SVG. And if you’re using an SVG that you didn’t create yourself, then the code is almost certainly not optimal, so using a standalone optimization tool is advisable.
Several tools for optimizing SVG code are out there. Peter Collingridge’s SVG Editor is an online tool that you input SVG code into either directly or by uploading an SVG file and that then provides you with several optimization options, like removing redundant code, comments, empty groups, white space and more. One option allows you to specify the number of decimal places of point coordinates.
Peter’s optimizer can also automatically move inline SVG properties to a style block at the top of the document. The nice thing about it is that, when you check an option, you can see the result of the optimization live, which enables you to better decide which optimizations to make. Certain optimizations could end up breaking your SVG. For example, one decimal place should normally be enough. If you’re working with a path-heavy SVG file, reducing the number of decimal places from four to one could slash your file’s size by as much as half. However, it could also entirely break the SVG. So, being able to preview an optimization is a big plus.
If you’d prefer an offline alternative to Peter’s online tool, try SVGO (the “O” is for “optimizer”), a Node.js-based tool that comes with a nice and simple drag-and-drop GUI.
The following screenshot (showing the path from the image above) is a simple before-and-after illustration of how much Peter’s tool optimizes SVG.
Notice the size of the original SVG compared to the optimized version. Not to mention, the optimized version is much more readable.
After optimizing the SVG, it’s ready to be embedded on a web page and further customized or animated with CSS.
Styling SVGs With CSS
The line between HTML and CSS is clear: HTML is about content and structure, and CSS is about the look. SVG blurs this line, to say the least. SVG 1.1 did not require CSS to style SVG nodes — styles were applied to SVG elements using attributes known as “presentation attributes.”
Presentation attributes are a shorthand for setting a CSS property on an element. Think of them as special style properties. They even contribute to the style cascade, but we’ll get to that shortly.
The following example shows an SVG snippet that uses presentation attributes to style the “border” (stroke) and “background color” (fill) of a star-shaped polygon:
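A sketch of such a snippet (the colors and point coordinates here are illustrative, not from the original demo):

```html
<svg xmlns="http://www.w3.org/2000/svg" width="300" height="300">
  <!-- the star's look is defined entirely by presentation attributes -->
  <polygon
    fill="#FF931E"
    stroke="#ED1C24"
    stroke-width="5"
    points="150,24 179,101 261,101 195,151 219,229 150,181 81,229 105,151 39,101 121,101" />
</svg>
```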
The fill, stroke and stroke-width attributes are presentation attributes.
In SVG, a subset of all CSS properties may be set by SVG attributes, and vice versa. The SVG specification lists the SVG attributes that may be set as CSS properties. Some of these attributes are shared with CSS, such as opacity and transform, among others, while some are not, such as fill, stroke and stroke-width, among others.
In SVG 2, this list will include x, y, width, height, cx, cy and a few other presentation attributes that were not possible to set via CSS in SVG 1.1. The new list of attributes can be found in the SVG 2 specification.
Another way to set the styles of an SVG element is to use CSS properties. Just like in HTML, styles may be set on an element using inline style attributes:
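For example (the circle and its values here are illustrative):

```html
<!-- the same styles as presentation attributes would give, but set inline -->
<circle cx="100" cy="100" r="80" style="fill: #FF931E; stroke: #ED1C24; stroke-width: 5;" />
```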
We mentioned earlier that presentation attributes are sort of special style properties and that they are just shorthand for setting a CSS property on an SVG node. For this reason, it only makes sense that SVG presentation attributes would contribute to the style cascade.
Indeed, presentation attributes count as low-level “author style sheets” and are overridden by any other style definitions: external style sheets, document style sheets and inline styles.
The following diagram shows the order of styles in the cascade. Styles lower in the diagram override those above them. As you can see, presentation attribute styles are overridden by all other styles except for those specific to the user agent.
For example, in the following code snippet, an SVG circle element has been drawn. The fill color of the circle will be deep pink, which overrides the blue fill specified in the presentation attribute.
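A minimal sketch of such a snippet (the inline style here is an assumption; the overriding style could equally come from a style sheet):

```html
<!-- the inline CSS fill wins over the fill presentation attribute -->
<circle cx="100" cy="100" r="50" fill="blue" style="fill: deeppink;" />
```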
Most CSS selectors can be used to select SVG elements. In addition to the general type, class and ID selectors, SVGs can be styled using CSS2’s dynamic pseudo-classes (:hover, :active and :focus) and other pseudo-classes (:first-child, :visited, :link and :lang). The remaining CSS2 selectors, including those having to do with generated content (such as ::before and ::after), are not part of the SVG language definition and, hence, have no effect on the style of SVGs.
The following is a simple animation of the fill color of a circle from deep pink to green when it is hovered over using the tag selector and the :hover pseudo-class:
transition: fill .3s ease-out;
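That lone transition rule fits into a complete pair of rules along these lines (the particular green is an assumption):

```css
circle {
  fill: deeppink;
  transition: fill .3s ease-out; /* animate fill changes over 300ms */
}

circle:hover {
  fill: #009966; /* transitions from deep pink to green on hover */
}
```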
Much more impressive effects can be created. A simple yet very nice effect comes from the Iconic icons set, in which a light bulb is lit up when hovered over. A demo of the effect is available.
Because presentation attributes are expressed as XML attributes, they are case-sensitive. For example, when specifying the fill color of an element, the attribute must be written as fill="…" and not Fill="…" or FILL="…".
Furthermore, keyword values for these attributes, such as the italic in font-style="italic", are also case-sensitive and must be specified using the exact case defined in the specification that defines that value.
All other styles specified as CSS properties — whether in a style attribute or a <style> tag or in an external style sheet — are subject to the grammar rules specified in the CSS specifications, which are generally less case-sensitive. That being said, the SVG “Styling” specification recommends using the exact property names (usually, lowercase letters and hyphens) as defined in the CSS specifications and expressing all keywords in the same case, as required by presentation attributes, and not taking advantage of CSS’s ability to ignore case.
Animating SVGs With CSS
SVGs can be animated the same way that HTML elements can, using CSS keyframes and animation properties or using CSS transitions.
In most cases, complex animations will usually contain some kind of transformation — a translation, a rotation, scaling and/or skewing.
In most respects, SVG elements respond to transform and transform-origin in the same way that HTML elements do. However, a few inevitable differences result from the fact that, unlike HTML elements, SVG elements aren’t governed by a box model and, hence, have no margin, border, padding or content boxes.
By default, the transform origin of an HTML element is at (50%, 50%), which is the element’s center. By contrast, an SVG element’s transform origin is positioned at the origin of the user’s current coordinate system, which is the (0, 0) point, in the top-left corner of the canvas.
Suppose we have an HTML <div> and an SVG <rect> element:
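Something along these lines (the dimensions and color are assumed for illustration):

```html
<div style="width: 100px; height: 100px; background-color: hotpink;"></div>

<svg width="150" height="150">
  <rect x="0" y="0" width="100" height="100" fill="hotpink" />
</svg>
```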
If we were to rotate both of them by 45 degrees, without changing the default transform origin, we would get the following result (the red circle indicates the position of the transform origin):
What if we wanted to rotate the SVG element around its own center, rather than the top-left corner of the SVG canvas? We would need to explicitly set the transform origin using the transform-origin property.
Setting the transform origin on an HTML element is straightforward: Any value you specify will be set relative to the element’s border box.
In SVG, the transform origin can be set using either a percentage value or an absolute value (for example, pixels). If you specify a transform-origin value in percentages, then the value will be set relative to the element’s bounding box, which includes the stroke used to draw its border. If you specify the transform origin in absolute values, then it will be set relative to the current user coordinate system of the SVG canvas.
If we were to set the transform origin of the <div> and <rect> from the previous example to the center using percentage values, we would do this:
transform-origin: 50% 50%;
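In context, with element selectors for the two elements (the rotation angle matches the earlier example):

```css
div, rect {
  transform: rotate(45deg);
  transform-origin: 50% 50%; /* rotate each element around its own center */
}
```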
The resulting transformation would look like so:
That being said, at the time of writing, setting the transform origin in percentage values currently does not work in Firefox. This is a known bug. So, for the time being, your best bet is to use absolute values so that the transformations behave as expected. You can still use percentage values for WebKit browsers, though.
In the following example, we have a pinwheel on a stick that we’ll rotate using CSS animation. To have the wheel rotate around its own center, we’ll set its transform origin in pixels and percentages:
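A sketch of the relevant rules (the class name, origin coordinates and timing are assumptions; the live demo’s values differ):

```css
.wheel {
  /* 50% 50% works in WebKit browsers, but for Firefox use absolute
     values: the wheel's center point in the SVG's coordinate system */
  transform-origin: 193px 164px;
  animation: spin 4s linear infinite;
}

@keyframes spin {
  to {
    transform: rotate(360deg);
  }
}
```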
You can check out the live result on Codepen. Note that, at the time of writing, CSS 3D transformations are not hardware-accelerated when used on SVG elements; they have the same performance profile as SVG transform attributes. However, Firefox does accelerate transforms on SVGs to some extent.
Animating SVG Paths
Snap.svg is described as being to SVG what jQuery is to HTML, and it makes dealing with SVGs and their quirks a lot easier.
That being said, you could create an animated line-drawing effect using CSS. The animation would require you to know the total length of the path you’re animating and then to use the stroke-dashoffset and stroke-dasharray SVG properties to achieve the drawing effect. Once you know the length of the path, you can animate it with CSS using the following rules:
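For example, for a path whose total length is 4000 units (the class name and timing here are assumptions):

```css
.drawing-path {
  stroke-dasharray: 4000 4000; /* dash and gap, each as long as the whole path */
  stroke-dashoffset: 4000;     /* offset the dash fully, so the path starts hidden */
  animation: draw 2s ease-in-out forwards;
}

@keyframes draw {
  to {
    stroke-dashoffset: 0; /* sliding the offset back "draws" the path */
  }
}
```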
You can view the live demo on JS Bin. Note that you can also write stroke-dasharray: 4000; instead of stroke-dasharray: 4000 4000 — if the two line and gap values are equal, then you can specify only one value to be applied to both.
If you don’t already know the total length of the path, you can retrieve it with JavaScript:
var path = document.querySelector('.drawing-path');
// measure the total length of the path
var length = path.getTotalLength();
// set the dash and offset up so the path starts out fully hidden
path.style.strokeDasharray = length;
path.style.strokeDashoffset = length;
// force a layout so the starting styles take effect before the transition
path.getBoundingClientRect();
// set the transition up
path.style.transition = 'stroke-dashoffset 2s ease-in-out';
// animate the offset down to zero to draw the path
path.style.strokeDashoffset = '0';
Jake Archibald has written an excellent article explaining the technique in more detail. Jake includes a nice interactive demo that makes it easy to see exactly what’s going on in the animation and how the two SVG properties work together to achieve the desired effect. I recommend reading his article if you’re interested in learning more about this technique.
Embedding SVGs
An SVG can be embedded in a document in six ways, each of which has its own pros and cons.
The reason we’re covering embedding techniques is because the way you embed an SVG will determine whether certain CSS styles, animations and interactions will work once the SVG is embedded.
An SVG can be embedded in any of the following ways:
as an image using the <img> tag: <img src="mySVG.svg" alt="" />
as a background image in CSS: .el { background-image: url(mySVG.svg); }
as an object using the <object> tag: <object type="image/svg+xml" data="mySVG.svg"><!-- fallback here --></object>
as an iframe using an <iframe> tag: <iframe src="mySVG.svg"><!-- fallback here --></iframe>
using the <embed> tag: <embed type="image/svg+xml" src="mySVG.svg" />
inline using the <svg> tag: <svg version="1.1" xmlns="http://www.w3.org/2000/svg" …> <!-- svg content --> </svg>
The <object> tag is the primary way to include an external SVG file. The main advantage of this tag is that there is a standard mechanism for providing an image (or text) fallback in case the SVG does not render. If the SVG cannot be displayed for any reason — such as because the provided URI is wrong — then the browser will display the content between the opening and closing <object> tags.
If you intend to use any advanced SVG features, such as CSS or scripting, then the HTML5 <object> tag is your best bet.
The <iframe> tag, just like the <object> tag, comes with a default way to provide a fallback for browsers that don’t support SVG, or those that do support it but can’t render it for whatever reason.
The <embed> tag was never a part of any HTML specification, but it is still widely supported. It is intended for including content that needs an external plugin to work. The Adobe Flash plugin requires the <embed> tag, and supporting this tag is the only real reason for its use with SVG. The <embed> tag does not come with a default fallback mechanism.
An SVG can also be embedded in a document inline — as a “code island” — using the <svg> tag. This is one of the most popular ways to embed SVGs today. Working with inline SVG and CSS is a lot easier because the SVG can be styled and animated by targeting it with style rules placed anywhere in the document. That is, the styles don’t need to be included between the opening and closing <svg> tags to work; whereas this condition is necessary for the other techniques.
Embedding SVGs inline is a good choice, as long as you’re willing to add to the size of the page and give up backwards compatibility (since it does not come with a default fallback mechanism either). Also, note that an inline SVG cannot be cached.
An SVG embedded with an <img> tag and one embedded as a CSS background image are treated in a similar way when it comes to CSS styling and animation. Styles and animations applied to an SVG using an external CSS resource will not be preserved once the SVG is embedded.
The following table shows whether CSS animations and interactions (such as hover effects) are preserved when an SVG is embedded using one of the six embedding techniques, as compared to SVG SMIL animations. The last column shows that, in all cases, SVG animations (SMIL) are preserved.
Table showing whether CSS styles, animations and interactions are preserved for each of the SVG embedding techniques.

| Embedding technique  | CSS Animations            | CSS Interactions (e.g. :hover) | SVG Animations (SMIL) |
|----------------------|---------------------------|--------------------------------|-----------------------|
| <img>                | Yes, only if inside <svg> | No                             | Yes                   |
| CSS background image | Yes, only if inside <svg> | No                             | Yes                   |
| <object>             | Yes, only if inside <svg> | Yes, only if inside <svg>      | Yes                   |
| <iframe>             | Yes, only if inside <svg> | Yes, only if inside <svg>      | Yes                   |
| <embed>              | Yes, only if inside <svg> | Yes, only if inside <svg>      | Yes                   |
| Inline <svg>         | Yes                       | Yes                            | Yes                   |
The behavior indicated in the table above is the standard behavior. However, implementations may differ between browsers, and bugs may exist.
Note that, even though SMIL animations will be preserved, SMIL interactions will not work for an SVG embedded as an image (i.e. <img> or via CSS).
Making SVGs Responsive
After embedding an SVG, you need to make sure it is responsive.
Depending on the embedding technique you choose, you might need to apply certain hacks and fixes to get your SVG to be cross-browser responsive. The reason is that the way browsers determine the dimensions of an SVG differs for some embedding techniques, and SVG implementations also differ among browsers, so SVGs end up being handled differently and require some style tweaking to behave consistently across all browsers.
I won’t get into details of browser inconsistencies, for the sake of brevity. I will only cover the fix or hack needed for each embedding technique to make the SVG responsive in all browsers for that technique. For a detailed look at the inconsistencies and bugs, check out my article on Codrops.
Whichever technique you choose, the first thing you’ll need to do is remove the height and width attributes from the root <svg> element.
You will need to preserve the viewBox attribute and set the preserveAspectRatio attribute to xMidYMid meet — if it isn’t already set to that value. Note that you might not need to explicitly set preserveAspectRatio to xMidYMid meet at all because it will default to this value anyway if you don’t change it.
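For example, a root element exported with fixed dimensions would change along these lines (the dimensions here are illustrative):

```html
<!-- before: fixed dimensions -->
<svg width="800" height="600" viewBox="0 0 800 600" xmlns="http://www.w3.org/2000/svg">
  <!-- content -->
</svg>

<!-- after: fluid, scales to its container while keeping its aspect ratio -->
<svg viewBox="0 0 800 600" preserveAspectRatio="xMidYMid meet" xmlns="http://www.w3.org/2000/svg">
  <!-- content -->
</svg>
```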
When an SVG is embedded as a CSS background image, no extra fixes or hacks are needed. It will behave just like any other bitmap background image and will respond to CSS’ background-image properties as expected.
An SVG embedded using an <img> tag will automatically be stretched to the width of the container in all browsers (once the width has been removed from the <svg>, of course). It will then scale as expected and be fluid in all browsers except for Internet Explorer (IE). IE will set the height of the SVG to 150 pixels, preventing it from scaling correctly. To fix this, you will need to explicitly set the width to 100% on the <img>.
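The fix, in its minimal form (scope the selector to your own markup as needed):

```css
img {
  width: 100%; /* prevents IE from locking the SVG at 150px tall */
}
```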
The only way to make an iframe responsive while maintaining the aspect ratio of the SVG is by using the “padding hack” pioneered by Thierry Koblentz on A List Apart. The idea behind the padding hack is to make use of the relationship of an element’s padding to its width in order to create an element with an intrinsic ratio of height to width.
When an element’s padding is set in percentages, the percentage is calculated relative to the width of the element, even when you set the top or bottom padding of the element.
To apply the padding hack and make the SVG responsive, the SVG needs to be wrapped in a container, and then you’ll need to apply some styles to the container and the SVG (i.e. the iframe), as follows:
```html
<!-- wrap the svg (here, the iframe) in a container -->
<div class="container">
  <iframe src="mySVG.svg"><!-- fallback here --></iframe>
</div>
```

```css
.container {
  height: 0;  /* collapse the container's height */
  width: 50%; /* specify any width you want (a percentage value, basically; 50% here is just an example) */
  /* apply padding using the following formula;
     this formula makes sure the aspect ratio of the container
     equals that of the SVG graphic */
  padding-top: (svg-height / svg-width) * width-value;
  position: relative; /* create positioning context for SVG */
}
```
The svg-height and svg-width variables are the values of the height and width of the <svg>, respectively — the dimensions that we removed earlier. And the width-value is any width you want to give the SVG container on the page.
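As a quick sanity check of the formula, here is a throwaway helper (the function name is mine, not part of any library):

```javascript
// Compute the percentage to use for padding-top, given the SVG's
// intrinsic dimensions and the width (in %) given to the container.
function paddingTopPercent(svgHeight, svgWidth, widthValue) {
  return (svgHeight / svgWidth) * widthValue;
}

// A 300-unit-tall, 400-unit-wide SVG displayed at 100% width:
console.log(paddingTopPercent(300, 400, 100)); // 75, i.e. padding-top: 75%
```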
Finally, the SVG itself (the iframe) needs to be positioned absolutely inside the container:
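A sketch of those rules, assuming the iframe is wrapped in a .container element as described above:

```css
.container iframe {
  position: absolute; /* take the iframe out of flow... */
  top: 0;
  left: 0;            /* ...and pin it to the container's top-left corner */
  width: 100%;
  height: 100%;       /* fill the container's intrinsic-ratio box */
}
```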
We position the iframe absolutely because collapsing the container’s height and then applying the padding to it would push the iframe beyond the boundaries of the container. So, to “pull it back up,” we position it absolutely. You can read more about the details in my article on Codrops.
Finally, an SVG embedded inline in an <svg> tag becomes responsive when the height and width are removed, because browsers will assume a width of 100% and will scale the SVG accordingly. However, IE has the same 150-pixel fixed-height issue mentioned earlier for the <img> tag; unfortunately, setting the width of the SVG to 100% is not sufficient to fix it this time.
To make the inline SVG fluid in IE, we also need to apply the padding hack to it. So, we wrap <svg> in a container, apply the padding-hack rules mentioned above to the container and, finally, position the <svg> absolutely inside it. The only difference here is that we do not need to explicitly set the height and width of <svg> after positioning it.
Using CSS Media Queries
SVG accepts and responds to CSS media queries as well. You can use media queries to change the styles of an SVG at different viewport sizes.
However, one important note here is that the viewport that the SVG responds to is the viewport of the SVG itself, not the page’s viewport!
An SVG embedded with an <img>, <object> or <iframe> will respond to the viewport established by these elements. That is, the dimensions of these elements will form the viewport inside of which the SVG is to be drawn and, hence, will form the viewport to which the CSS media-query conditions will be applied.
The following example includes a set of media queries inside an SVG that is then referenced using an <img> tag:
```html
<svg xmlns="http://www.w3.org/2000/svg" version="1.1" viewBox="0 0 194 186">
  <style>
    @media all and (max-width: 50em) {
      /* select SVG elements and style them */
    }

    @media all and (max-width: 30em) {
      /* styles */
    }
  </style>

  <!-- SVG elements here -->
</svg>
```
When the SVG is referenced, it will get the styles specified in the media queries above whenever the width of the <img> element matches the media conditions (50em and 30em, respectively).
SVGs are images, and just as images can be accessible, so can SVGs. And making sure your SVGs are accessible is important, too.
I can’t emphasize this enough: Make your SVGs accessible. You can do several things to make that happen. For a complete guide, I recommend Leonie Watson’s excellent article on SitePoint. Her tips include using the <title> and <desc> tags in the <svg>, using ARIA attributes and much more.
In addition to accessibility, don’t forget to optimize your SVGs and provide fallbacks for non-supporting browsers. I recommend Todd Parker’s presentation.
Last but not least, you can always check support for different SVG features on Can I Use. I hope you’ve found this article to be useful. Thank you for reading.
There are more than 16 million colors, and any great blog post you come across on the internet will tell you the “feelings” conveyed by only a handful of them. If you sell to people from different ethnicities and cultures, choosing colors for your website can become even more difficult, as a color that relates to wealth and prosperity in one country may relate to mourning in another. How do you go about it then?
In this post I will help you choose colors for your website’s CTAs, background and other important entities that you want people to focus on. As a believer in “one size doesn’t fit all” and “data (not opinions and experience) gets the most respect,” I won’t be able to hand you some magic potion and tell you the exact colors you should use. But I promise to take you through 3 actionable tips that you can go back and test right away to increase your website’s conversions.
1) Color the Primary Goal of your Website to Make it Stand Out
Imagine a shopping list of 20 items, all written in blue ink except for one in red. If asked to scan this list for 10 seconds, which item do you think you are most likely to recall later? Multiple experiments have confirmed that outliers (the item in red, in this example) are what people remember most often. This is because of a phenomenon known as the Von Restorff effect (also known as the isolation effect), which states that an item that stands out is more likely to be remembered than others.
Applying this to your website: if you want your calls to action to get immediate attention, make them stand out. Use a color that has high contrast compared to your background and that hasn’t been used for any other entity on the page. Look at how Facebook and LinkedIn do it on their homepages:
Choosing a contrasting color for your primary CTA is not very difficult. You just have to pick a color from the color wheel that sits diagonally opposite your background color or the most-used color on your page.
Let’s for a moment go back to the red button vs. green button case study. Have a closer look at the screenshot below. You will find that the color scheme of the original page has some emphasis on green: the Performable logo is green, the screenshot used on the page has some elements in green, and one of the features also has an icon in green. A quick scan doesn’t really make the CTA stand out from the rest of the elements. I wouldn’t be surprised if testing the original page against a variation with the CTA in yellow or orange produced the same or better results.
The important takeaway from this case study is to create a visual contrast for your goal. At the end of the day, it’s not the button color that is going to sell your stuff but how prominently you display it for people to make a decision before abandoning your website for your competitors’.
2) Choose Colors that are “All”-User Friendly
In the United States alone, about 7% of males (roughly 10.5 million men) and 0.4% of females have some form of color blindness. In Australia, these figures are 8% for men and 0.4% for women. The most common problem is difficulty telling red from green.
Needless to say, when deciding colors for your website and the areas where you want people to focus, it is imperative to keep in mind people who have some form of color blindness. And if you have a SaaS product that shows results in charts and graphs, it becomes even more important to choose the right colors so that they are easily distinguishable for everyone. See below how a contrast between foreground and background appears to people with certain forms of color blindness. You will notice that while eyes with normal vision can easily read the text, people with protanopia and deuteranopia (the most common forms of color blindness) will simply not be able to read what’s written.
Common solutions to ensure a great experience for everyone:
Choose colors many steps away from each other on the color wheel
Use tints (a mixture of a color with white) for the background and shades (a mixture of a color with black) for the foreground, or vice versa. Or make one element even darker and the other even lighter to create better contrast.
3) Train Visitors with your Color Key
Consider how bar graphs work. To look at data of one particular type, you just follow its color or pattern. Once you understand what a particular color or pattern bar stands for, you are able to compare easily focusing only on that particular color or pattern.
Similarly, if you use one color consistently on your website for a particular CTA (say, signup), you will subconsciously train your users in the meaning of that color on the website. As an example, suppose someone is evaluating a SaaS product on your website, and you have a shiny orange button for the free trial on every page. When they are done evaluating, their eyes will look for the orange thing, on whichever page they are, to sign up.
This way, you can even tell them which color corresponds to a heading, which means a link and which calls for a purchase.
See how CampaignMonitor does it beautifully. The CTA buttons on all of their pages that ask people to sign up for an account are in green, and no other CTA uses the same color. This creates a consistent visual memory for visitors.
How has your experience with website colors been? Tried any A/B tests that worked well? Or maybe some that didn’t? Would love to hear all of it in the comments section below!