A few weeks ago, a Fortune 500 company asked that I review their A/B testing strategy.
The results were good, the hypotheses strong, everything seemed to be in order… until I looked at the log of changes in their testing tool.
I noticed several blunders: in some experiments, they had adjusted the traffic allocation for the variations mid-experiment; some variations had been paused for a few days, then resumed; and experiments were stopped as soon as statistical significance was reached.
When it comes to testing, too many companies worry about the “what”, or the design of their variations, and not enough worry about the “how”, the execution of their experiments.
Don’t get me wrong, variation design is important: you need solid hypotheses supported by strong evidence. However, if you believe your work is finished once you have come up with variations for an experiment and pressed the launch button, you’re wrong.
In fact, the way you run your A/B tests is the most difficult and most important piece of the optimization puzzle.
There are three kinds of lies: lies, damned lies, and statistics.
– Mark Twain
In this post, I will share the biggest mistakes you can make within each step of the testing process: the design, launch, and analysis of an experiment, and how to avoid them.
This post is fairly technical. Here’s how you should read it:
If you are just getting started with conversion optimization (CRO), or are not directly involved in designing or analyzing tests, feel free to skip the more technical sections and simply skim for insights.
If you are an expert in CRO or are involved in designing and analyzing tests, you will want to pay attention to the technical details. These sections are highlighted in blue.
Mistake #1: Your test has too many variations
The more variations, the more insights you’ll get, right?
Not exactly. Having too many variations slows down your tests but, more importantly, it can impact the integrity of your data in 2 ways.
First, the more variations you test against each other, the more traffic you will need, and the longer you’ll have to run your test to get results that you can trust. This is simple math.
But the issue with running a longer test is that you are more likely to be exposed to cookie deletion. If you run an A/B test for more than 3–4 weeks, the risk of sample pollution increases: in that time, people will have deleted their cookies and may enter a different variation than the one they were originally in.
Within 2 weeks, you can see a 10% dropout from people deleting cookies, and that can really affect the quality of your sample.
The second risk when testing multiple variations is that the significance level goes down as the number of variations increases.
For example, if you use the accepted significance level of 0.05 and decide to test 20 different scenarios, one of those will be significant purely by chance (20 * 0.05). If you test 100 different scenarios, the number goes up to five (100 * 0.05).
In other words, the more variations, the higher the chance of a false positive, i.e. the higher the chance of declaring a winner that isn't actually better.
Google’s 41 shades of blue is a good example of this. In 2009, when Google could not decide which shades of blue would generate the most clicks on their search results page, they decided to test 41 shades. At a 95% confidence level, the chance of getting a false positive was 88%. If they had tested 10 shades, the chance of getting a false positive would have been 40%, 9% with 3 shades, and down to 5% with 2 shades.
You can calculate the chance of getting at least one false positive using the following formula: 1-(1-a)^m, where m is the number of comparisons being made (the number of variations tested against the Control) and a is the significance level. With a significance level of 0.05, the equation looks like this:
1-(1-0.05)^m or 1-0.95^m.
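To see how quickly this compounds, here is a short illustrative Python snippet (the article itself contains no code; the function name is my own) implementing the formula above:

```python
def false_positive_probability(m, alpha=0.05):
    """Probability of at least one false positive across m comparisons,
    each tested at significance level alpha: 1 - (1 - alpha)^m."""
    return 1 - (1 - alpha) ** m

for m in (1, 3, 10, 20, 40):
    print(f"{m:>2} comparisons -> {false_positive_probability(m):.1%} chance of a false positive")
```

With 20 comparisons, the chance of at least one spurious "winner" is already about 64%.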
You can fix the multiple comparison problem using the Bonferroni correction, which calculates the confidence level for an individual test when more than one variation or hypothesis is being tested.
Wikipedia illustrates the Bonferroni correction with the following example: “If an experimenter is testing m hypotheses, [and] the desired significance level for the whole family of tests is a, then the Bonferroni correction would test each individual hypothesis at a significance level of a/m.
For example, if [you are] testing m = 8 hypotheses with a desired a = 0.05, then the Bonferroni correction would test each individual hypothesis at a = 0.05/8=0.00625.”
In other words, you’ll need a 0.625% significance level, which is the same as a 99.375% confidence level (100% – 0.625%) for an individual test.
The Bonferroni correction tends to be a bit too conservative and is based on the assumption that all tests are independent of each other. However, it demonstrates how multiple comparisons can skew your data if you don’t adjust the significance level accordingly.
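The correction itself is a one-liner. As a quick illustrative sketch in Python (function name is my own):

```python
def bonferroni(alpha, m):
    """Per-comparison significance level that keeps the family-wise
    false positive rate at alpha across m comparisons."""
    return alpha / m

# Wikipedia's example: m = 8 hypotheses at a desired alpha of 0.05
adjusted = bonferroni(0.05, 8)
print(adjusted)             # 0.00625
print(f"{1 - adjusted:.3%}")  # 99.375% confidence level per test
```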
The following tables summarize the multiple comparison problem.
Probability of a false positive with a 0.05 significance level (1 − 0.95^m):

| Variations compared (m) | 1 | 2 | 3 | 5 | 10 | 20 |
| Chance of a false positive | 5% | 9.8% | 14.3% | 22.6% | 40.1% | 64.2% |
Adjusted significance and confidence levels to maintain a 5% false discovery probability (Bonferroni correction, 0.05/m):

| Variations compared (m) | 1 | 2 | 5 | 10 |
| Adjusted significance level | 0.05 | 0.025 | 0.01 | 0.005 |
| Adjusted confidence level | 95% | 97.5% | 99% | 99.5% |
In this section, I’m talking about the risks of testing a high number of variations in an experiment. But the same problem also applies when you test multiple goals and segments, which we’ll review a bit later.
Each additional variation and goal adds new statistical comparisons to an experiment. In a scenario with four variations and four goals, that's 16 potential outcomes that need to be controlled for separately.
Some A/B testing tools, such as VWO and Optimizely, adjust for the multiple comparison problem. These tools will make sure that the false positive rate of your experiment matches the false positive rate you think you are getting.
In other words, the false positive rate you set in your significance threshold will reflect the true chance of getting a false positive: you won’t need to correct and adjust the confidence level using the Bonferroni or any other methods.
One final problem with testing multiple variations can occur when you are analyzing the results of your test. You may be tempted to declare the variation with the highest lift the winner, even though there is no statistically significant difference between the winner and the runner-up. This means that, even though one variation may be performing better in the current test, the runner-up could “win” in the next round.
You should consider both variations as winners.
Mistake #2: You change experiment settings in the middle of a test
When you launch an experiment, you need to commit to it fully. Do not change the experiment settings, the test goals, the design of the variation or of the Control mid-experiment. And don’t change traffic allocations to variations.
Changing the traffic split between variations during an experiment will impact the integrity of your results because of a problem known as Simpson's Paradox. This statistical paradox appears when a trend that shows up in several groups of data disappears when those groups are combined.
Ronny Kohavi from Microsoft shares an example wherein a website gets one million daily visitors, on both Friday and Saturday. On Friday, 1% of the traffic is assigned to the treatment (i.e. the variation), and on Saturday that percentage is raised to 50%.
Even though the treatment has a higher conversion rate than the Control on both Friday (2.30% vs. 2.02%) and Saturday (1.2% vs. 1.00%), when the data is combined over the two days, the treatment seems to underperform (1.20% vs. 1.68%).
This is because we are dealing with weighted averages. The data from Saturday, a day with an overall worse conversion rate, impacted the treatment more than that from Friday.
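Using the numbers from Kohavi's example, a few lines of Python (illustrative only; the article itself contains no code) make the paradox visible:

```python
# Kohavi's example: 1M visitors per day; treatment gets 1% of traffic
# on Friday and 50% on Saturday.
days = [
    # (treatment_visitors, treatment_cr, control_visitors, control_cr)
    (10_000, 0.0230, 990_000, 0.0202),   # Friday: treatment wins
    (500_000, 0.0120, 500_000, 0.0100),  # Saturday: treatment wins
]

t_conv = sum(v * cr for v, cr, _, _ in days)
t_vis = sum(v for v, _, _, _ in days)
c_conv = sum(v * cr for _, _, v, cr in days)
c_vis = sum(v for _, _, v, _ in days)

print(f"Treatment combined: {t_conv / t_vis:.2%}")  # 1.22%
print(f"Control combined:   {c_conv / c_vis:.2%}")  # 1.68%
```

The treatment wins on each individual day, yet loses on the combined data, because most of its traffic arrived on the lower-converting Saturday.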
We will return to Simpson’s Paradox in just a bit.
Changing the traffic allocation mid-test will also skew your results because it alters the sampling of your returning visitors.
Changes made to the traffic allocation only affect new users. Once visitors are bucketed into a variation, they will continue to see that variation for as long as the experiment is running.
So, let’s say you start a test by allocating 80% of your traffic to the Control and 20% to the variation. Then, after a few days you change it to a 50/50 split. All new users will be allocated accordingly from then on.
However, all the users that entered the experiment prior to the change will be bucketed into the same variation they entered previously. In our current example, this means that the returning visitors will still be assigned to the Control and you will now have a large proportion of returning visitors (who are more likely to convert) in the Control.
Note: This problem of changing traffic allocation mid-test only happens if you make a change at the variation level. You can change the traffic allocation at the experiment level mid-experiment. This is useful if you want to have a ramp up period where you target only 50% of your traffic for the first few days of a test before increasing it to 100%. This won’t impact the integrity of your results.
As I mentioned earlier, the “do not change mid-test rule” extends to your test goals and the designs of your variations. If you’re tracking multiple goals during an experiment, you may be tempted to change what the main goal should be mid-experiment. Don’t do it.
We optimizers all have a favorite variation that we secretly hope will win during any given test. This is not a problem until you start giving weight to the metrics that favor this variation. Decide on a goal metric that you can measure in the short term (the duration of a test) and that can predict your success in the long term. Track it and stick to it.
It is useful to track other key metrics to gain insights and/or debug an experiment, if something looks wrong. However, these are not the metrics you should look at to make a decision, even though they may favor your favorite variation.
Let’s say you have avoided the 2 mistakes I’ve already discussed, and you’re pretty confident about the results you see in your A/B testing tool. It’s time to analyze the results, right?
Not so fast! Did you stop the test as soon as it reached statistical significance?
I hope not…
Statistical significance should not dictate when you stop a test. It only tells you if there is a difference between your Control and your variations. This is why you should not wait for a test to be significant (because it may never happen) or stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to understand better when to stop a test.
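The required sample size depends on your baseline conversion rate, the minimum lift you want to detect, and your significance and power settings. As a rough illustration (the article itself includes no code; the function name and defaults here are my own), a standard two-proportion sample-size formula in Python:

```python
from statistics import NormalDist

def sample_size_per_variation(p, mde, alpha=0.05, power=0.80):
    """Rough visitors needed per variation to detect a relative lift
    `mde` over baseline conversion rate `p` (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p2 = p * (1 + mde)
    p_bar = (p + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p * (1 - p) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p) ** 2) + 1

# e.g. baseline 3% conversion rate, detecting a 20% relative lift
print(sample_size_per_variation(0.03, 0.20))  # roughly 14,000 per variation
```

Decide on this number before launch, and don't stop the test until each variation has reached it.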
Now, assuming you’ve stopped your test at the correct time, we can move on to segmentation. Segmentation and personalization are hot topics in marketing right now, and more and more tools enable segmentation and personalization.
There are 2 main problems with post-test segmentation, however, that will impact the statistical validity of your segments (when done incorrectly).
1. The sample size of your segments is too small. You stopped the test when you reached the calculated sample size, but at the segment level the samples are likely too small, and the lift between segments has no statistical validity.
2. The multiple comparison problem. The more segments you compare, the greater the likelihood of a false positive among those comparisons. At a 95% confidence level, you can expect roughly one false positive for every 20 post-test segments you look at.
There are different ways to prevent these two issues, but the easiest and most accurate strategy is to create targeted tests (rather than breaking down results per segment post-test).
To be clear, I'm not advocating against post-test segmentation; quite the opposite. In fact, looking only at aggregate data can be misleading. (Simpson's Paradox strikes back.)
The Wikipedia definition for Simpson’s Paradox provides a real-life example from a medical study comparing the success rates of two treatments for kidney stones.
The table below shows the success rates and numbers of treatments for treatments involving both small and large kidney stones.
The paradoxical conclusion is that treatment A is more effective when used on small stones, and also when used on large stones, yet treatment B is more effective when considering both sizes at the same time.
In the context of an A/B test, this would look something like this:
Simpson's Paradox surfaces when sampling is not uniform, that is, when the sample sizes of your segments differ. There are a few things you can do to avoid getting lost in, and misled by, this paradox.
First, you can prevent this problem from happening altogether by using stratified sampling, which is the process of dividing members of the population into homogeneous and mutually exclusive subgroups before sampling. However, most tools don’t offer this option.
If you are already in a situation where you have to decide whether to act on aggregate data or on segment data, Georgi Georgiev recommends you look at the story behind the numbers, rather than at the numbers themselves.
“My recommendation in the specific example [illustrated in the table above] is to refrain from making a decision with the data in the table. Instead, we should consider looking at each traffic source/landing page couple from a qualitative standpoint first. Based on the nature of each traffic source (one-time, seasonal, stable) we might reach a different final decision. For example, we may consider retaining both landing pages, but for different sources.
In order to do that in a data-driven manner, we should treat each source/page couple as a separate test variation and perform some additional testing until we reach the desired statistically significant result for each pair (currently we do not have significant results pair-wise).”
In a nutshell, it can be complicated to get post-test segmentation right, but when you do, it will unveil insights that your aggregate data can’t. Remember, you will have to validate the data for each segment in a separate follow up test.
The execution of an experiment is the most important part of a successful optimization strategy. If your tests are not executed properly, your results will be invalid and you will be relying on misleading data.
It is always tempting to showcase good results. Results are often the most important factor when your boss is evaluating the success of your conversion optimization department or agency.
But results aren't always trustworthy. Too often, the numbers you see in case studies lack valid statistical inferences: either they rely too heavily on an A/B testing tool's unreliable stats engine, or they haven't addressed the common pitfalls outlined in this post.
Use case studies as a source of inspiration, but make sure that you are executing your tests properly by doing the following:
If your A/B testing tool doesn’t adjust for the multiple comparison problem, make sure to correct your significance level for tests with more than 1 variation
Don’t change your experiment settings mid-experiment
Don’t use statistical significance as an indicator of when to stop a test, and make sure to calculate the sample size you need to reach before calling a test complete
Finally, keep segmenting your data post-test. But make sure you are not falling into the multiple comparison trap and are comparing segments that are significant and have a big enough sample size
Stock Image or Real Image – what should you use? The debate has been raging for a while now. That’s unfortunate, because there is no one answer that will work for all businesses alike. Why speculate at all, when we can throw the contenders into an A/B test and sit back while statistics find us a winner? Think of it as WWE, except A/B tests are real, and they get you better business. Let’s get right to it then, shall we?
160 Driving Academy is an Illinois-based firm that offers truck-driving classes and even guarantees a job upon graduation. Visitors to the site primarily use the contact form on the homepage, or the prominently displayed phone number, to contact the academy. Looking to improve the conversion rate on the truck-driving classes page, the academy reached out to SpectrumInc, a lead-generation software and internet marketing company. The rest (as they have not yet begun to say, but soon will) is a future of great conversions!
The academy had been using a stock image of a man driving a truck on its homepage. When SpectrumInc came on board, they decided to test the page with the photograph of a real student instead. The hypothesis was that the image of an actual student would outperform the stock image the academy had been using. On being asked about the background of this test, Brian McKenzie from SpectrumInc explains,
“… in this case we had a branded photo of an actual 160 Driving Academy student standing in front of a truck available, but we originally opted not to use it for the page out of concern that the student’s ‘University of Florida’ sweatshirt would send the wrong message to consumers trying to obtain an Illinois, Missouri, or Iowa license. (These states are about 2,000 kilometers from the University of Florida).”
Better sense prevailed, and they decided to test it anyway.
What Goals Were Tracked?
The primary conversion goal: Number of visits to the ‘Thank You’ page. These are the pages that visitors are taken to after they fill out a conversion form, like the ‘contact us’ form on the main page.
The secondary conversion goal: Number of visits to the ‘Registration’ page. The academy carries a CTA button on its page that says “Register for Classes”. A conversion would be recorded every time a visitor clicked on the button and visited the “Registration” page.
The Test: Stock Image or Real Image
An incredible 161% lift in conversions, at a 98% confidence level. In other words, the probability of such a massive change in conversions occurring simply due to random chance (and not because the variation actually is better at converting visitors) is just 2%.
Secondary Goal: Registrations, too, saw a 38.4% spike on the variation compared to the control, at 98% confidence level.
Why did the Variation win?
As with any retrospective analysis, the key lies in exploring the data and connecting it to the knowledge that is already out there. First, let’s understand why images are such a big deal, and what part they play in user experience.
Short (and borrowed) answer: An image is worth a thousand words.
Concepts learned in the form of images are recalled more easily and more frequently than ideas learned through text. In fact, Wikipedia notes that this effect is even more pronounced in older people than in younger ones. So if your business targets the 25+ age group, images are a great way to pass on brand-related information for better recall.
Billion Dollar Graphics explains, and I quote, “[the] human brain deciphers image elements simultaneously, while language is decoded in a linear, sequential manner taking more time to process.” This is further illustrated in the following image.
Do you see how much easier it is to understand that the reference is to a square from the image than from its textual description? In fact, if you are in the mood for some serious reading, I strongly recommend this incredibly insightful post on the power of visual communication.
This frequently quoted eye-tracking study from the Nielsen Norman Group also confirms that we spend more time dwelling on images on a webpage than on the text itself. When they tested an “About Us” page that contained thumbnail portraits of each member of the team, this is what was found:
“Here, the user spent 10% more time viewing the portrait photos than reading the biographies, even though the bios consumed 316% more space. It’s obvious from the gaze plot that the user was in a hurry and just wanted to get a quick overview of the FreshBooks team, and looking at photos is indeed faster than reading full paragraphs.”
Evidently, people focus more on images on a page than on the text itself. And they retain it longer. The case for images cannot be overemphasized.
Now that you and I agree upon the need for using images, let’s dive right into analyzing the case. We start with:
The Control, with the Stock Photograph
Why did it convert so poorly?
We Love Ignoring Images That Look Stock
Stock images were all the rage back in the late ’90s, when taking a good picture was best left to professionals with complex, expensive cameras. Naturally, online businesses that were just starting out had to resort to the relatively inexpensive and definitely good-looking stock photos.
Here’s the issue: we have been exposed to banner advertisements for so long that our eyes have gotten trained to ignore any web element that evokes the feel of an advertisement. The adage “familiarity breeds contempt” holds true and banner blindness has been confirmed to be a real phenomenon in numerous studies. More stock images, anyone?
Stock Images Are Not Unique
I popped the stock image from the client’s old homepage into TinEye, a reverse image search engine, and this is what it threw up.
That’s 30 other instances on the web where the same stock photo was found.
Just to hammer home the point, I let Google Image Search do its thing. And here’s what Google found for me.
That’s 175 results. So much for uniqueness and product differentiation.
So there are more copies of that image out there; how is that a big deal, you might ask?
Where do you suppose the stock image of a man driving a truck would figure on the web?
That’s right, on other business websites that are related to trucks; websites your potential customer might have visited already. Google took just 0.45 seconds to find 175 places on the web where the image appeared. Human users would take longer, but they’ll get there eventually. And when a potential customer sees a familiar image on your site, how would they judge your business and its credibility?
Go on, ask me, how would anyone recollect seeing the same image somewhere in a corner of the web?
Enough of beating the life out of stock images. Actually, using stock images, in and of itself, is not the real problem. There are ways to use good, relevant stock images without running into the problem of duplicates; like having a Rights Managed Licence. Instead, the real problem is:
Using Irrelevant Stock Images
Okay, stop being yourself for a moment. Slip into the user’s shoes, and I promise we shall see better.
You are looking to get a truck licence. Google suggests you check out 160drivingacademy.com.
So you do what you always do. You click and reach the site.
Real images evoke trust. On a business site, users are not looking for emotional gratification. They are looking for hints, information about what they’d get if they decide to buy your product/service. A website that uses real images screams at its users,
“This is exactly what you will get if you choose us! It’s great, and we know it!”
Get the trust, make the sale.
Over the years, we’ve been so indiscriminately exposed to every kind of scam, sham and spam, that we don’t trust easily. Least of all, on the internet. A website that reveals its offerings, plain and clear, tells us there won’t be any nasty surprises. Hence, we trust.
Clever Branding and the Hidden Call To Action
Without the variation image, there was exactly one part of the site that called out “160 Driving Academy”. With the variation, there are three such places.
We’ve already seen how our eyes are drawn to images much quicker than to text. The variation image draws attention to itself, and in the few seconds that a visitor’s eyes stay on it, the mind picks up two strong branding signals: the brand name itself, and the color associated with it — yellow — generously splashed across the truck in the image. A deceptively simple way to make sure that even users who bounce off the first time remember the brand. I think I wouldn’t be wrong in assuming that a considerable number of the conversions resulted from users who revisited the page.
No, that’s not all.
A call to action. That little big thing.
What better place to have it than in the image itself! That too, right next to the contact form. It gives the user direction on what’s to be done if they are interested in taking things ahead, and it creates urgency using the term “Today!”.
So there, little relevant things really matter.
Room for Further Testing
If you check the academy’s current page, you’ll see that the “Florida Gators” print has been edited out of the student’s sweatshirt. If you remember, Brian had pointed out how the reference to ‘Florida’ might confuse prospects who are primarily from Illinois. Removing the “confusing” text from the image should improve conversions even further. Brian also pointed out that the average age of a student at the academy is close to 40, while the student in the image is closer to 25. With this context, Brian shares his vision for further testing,
“..trying to narrow down whether pictures of actual customers, pictures of actual employees, or pictures of actual products/equipment/objects convert best. Then you can do more incremental tests, like whether a 40-year-old student would convert better than a 25-year-old or whether the student should be holding up his license or just standing in front of the truck.”
Are Your Images Relevant?
What do you think? Is relevance the most vital criterion in selecting an image?
If you feel so, I would like you to head back to your website and reconsider the relevance of the image(s) used. Are they relevant? Would you like some help figuring out if it’s relevant or not?
And if you feel relevance is not the primary consideration, I would love to know your take on it.
Tell us right here, or, if you are a person of few words (couldn’t help it) let us know on Twitter @VWO or, get to me straight @SharanTheSuresh.
Before I leave, here are two more brilliant ‘Stock Image vs Real Image’ case studies from our archive.
Countless algorithms for encrypting data exist in computer science. One of the lesser known and less common encryptions is ROT13, a derivative of the Caesar cypher encryption technique.
In this tutorial, we’ll learn about ROT13 encryption and how it works. We’ll see how text (or strings) can be programmatically encoded in ROT13 using PHP. Finally, we’ll code a WordPress plugin that scans a post for blacklisted words and replaces any it finds with their ROT13-encoded equivalents.
If you own a blog on which multiple authors or a certain group of people have the privilege of publishing posts, then a plugin that encrypts or totally removes inappropriate words might come in handy.
ROT13 (short for “rotate by 13 places,” sometimes abbreviated as ROT-13) is a simple encryption technique for English that replaces each letter with the one 13 places forward or back along the alphabet. So, A becomes N, B becomes O and so on up to M, which becomes Z. Then, the sequence continues at the beginning of the alphabet: N becomes A, O becomes B and so on up to Z, which becomes M.
A major advantage of ROT13 over other rot(N) techniques (where “N” is an integer that denotes the number of places down the alphabet in a Caesar cypher encryption) is that it is “self-inverse,” meaning that the same algorithm is applied to encrypt and decrypt data.
Below is a ROT13 table for easy reference.
| A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
| N | O | P | Q | R | S | T | U | V | W | X | Y | Z | A | B | C | D | E | F | G | H | I | J | K | L | M |
If we encrypted the domain smashingmagazine.com in ROT13, the result would be fznfuvatzntnmvar.pbz, and the sentence “Why did the chicken cross the road?” would become “Jul qvq gur puvpxra pebff gur ebnq?”
Note that only letters in the alphabet are affected by ROT13. Numbers, symbols, white space and all other characters are left unchanged.
Transforming Strings To ROT13 In PHP
PHP includes a function, str_rot13(), for converting a string to its ROT13-encoded value. To encode text in ROT13 using this function, pass the text as an argument to the function.
echo str_rot13('smashingmagazine.com'); // fznfuvatzntnmvar.pbz
echo str_rot13('The best web design and development blog'); // Gur orfg jro qrfvta naq qrirybczrag oybt
Using ROT13 In WordPress
Armed with this knowledge, I thought of ways it might be handy in WordPress. I ended up creating a plugin that encodes blacklisted words found in posts using ROT13.
The plugin consists of a textarea field (located on the plugin’s settings page) in which you input blacklisted words, which are then saved to the database for later use in filtering WordPress posts.
Without further ado, let’s start coding the plugin.
<?php
/*
Plugin Name: Rot13 Words Blacklist
Plugin URI: http://smashingmagazine.com/
Description: A simple plugin that detects and encrypts blacklisted words in ROT13
Author: Agbonghama Collins
Author URI: http://w3guy.com
Text Domain: rot13
Domain Path: /lang/
*/
As mentioned, the plugin will have a settings page with a textarea field that collects and saves blacklisted words to WordPress’ database (specifically the options table).
Below is a screenshot of what the plugin’s settings (or admin) page will look like.
Now that we know what the options page will look like, let’s build it using WordPress’ Settings API.
Building the Settings Page
First, we create a submenu item under the main “Settings” menu by calling add_options_page() inside a function that is hooked to the admin_menu action.
add_action( 'admin_menu', 'rot13_plugin_menu' );

/**
 * Add submenu to main Settings menu
 */
function rot13_plugin_menu() {
	add_options_page(
		__( 'Rot13 Blacklisted Words', 'rot13' ),
		__( 'Rot13 Blacklisted Words', 'rot13' ),
		'manage_options',
		'rot13-words-blacklist',
		'rot13_plugin_settings_page'
	);
}
The fifth parameter of add_options_page() is the name of the function (rot13_plugin_settings_page) that is called to output the contents of the page.
Below is the code for rot13_plugin_settings_page().
/**
 * Output the contents of the settings page.
 */
function rot13_plugin_settings_page() {
	echo '<div class="wrap">';
	echo '<h2>', __( 'Rot13 Blacklisted Words', 'rot13' ), '</h2>';
	echo '<form action="options.php" method="post">';
	do_settings_sections( 'rot13-words-blacklist' );
	settings_fields( 'rot13_settings_group' );
	submit_button();
	echo '</form>';
	echo '</div>';
}
Next, we add a new section to the “Settings” page with add_settings_section(). The textarea field we mentioned earlier will be added to this section with add_settings_field(). Finally, the settings are registered with register_setting().
Below is the code for add_settings_section(), add_settings_field() and register_setting().
// Add the section
add_settings_section(
	'rot13_settings_section',
	'',
	'rot13_setting_section_callback_function',
	'rot13-words-blacklist'
);

// Add the textarea field to the section.
add_settings_field(
	'rot13_blacklisted_words',
	__( 'Blacklisted words', 'rot13' ),
	'rot13_setting_callback_function',
	'rot13-words-blacklist',
	'rot13_settings_section'
);

// Register our setting so that $_POST handling is done for us
register_setting( 'rot13_settings_group', 'rot13_plugin_option', 'sanitize_text_field' );
The three functions above must be enclosed in a function and hooked to the admin_init action, like so:
/**
 * Hook the Settings API calls to the 'admin_init' action
 */
function rot13_settings_api_init() {
	// Add the section
	add_settings_section(
		'rot13_settings_section',
		'',
		'rot13_setting_section_callback_function',
		'rot13-words-blacklist'
	);

	// Add the textarea field to the section
	add_settings_field(
		'rot13_blacklisted_words',
		__( 'Blacklisted words', 'rot13' ),
		'rot13_setting_callback_function',
		'rot13-words-blacklist',
		'rot13_settings_section'
	);

	// Register our setting so that $_POST handling is done for us
	register_setting( 'rot13_settings_group', 'rot13_plugin_option', 'sanitize_text_field' );
}
add_action( 'admin_init', 'rot13_settings_api_init' );
Lest I forget, here is the code for the rot13_setting_callback_function() and rot13_setting_section_callback_function() functions, which will output the textarea field and the description of the field (at the top of the section), respectively.
/**
 * Add a description of the field to the top of the section
 */
function rot13_setting_section_callback_function() {
	echo '<p>' . __( 'Enter a list of words to blacklist, separated by commas (,)', 'rot13' ) . '</p>';
}

/**
 * Callback function to output the textarea form field
 */
function rot13_setting_callback_function() {
	echo '<textarea rows="10" cols="60" name="rot13_plugin_option" class="code">' . esc_textarea( get_option( 'rot13_plugin_option' ) ) . '</textarea>';
}
At this point, we are done building the settings page for the plugin.
Up next is getting the plugin to detect blacklisted words and encrypt them with ROT13.
Detecting Blacklisted Words and Encrypting in ROT13
Here is an overview of how we will detect blacklisted words in a WordPress post:
1. The post’s contents are broken down into individual words and saved to an array ($post_words).
2. The blacklisted words saved by the plugin to the database are retrieved. They, too, are broken down into individual words and saved to an array ($blacklisted_words).
3. We iterate over the $post_words array and check for any word that is on the blacklist.
4. If a blacklisted word is found, str_rot13() encodes it in ROT13.
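Before wiring this into WordPress, the four steps above can be sketched in plain PHP; the $content and $blacklist values here are hard-coded stand-ins for the post content and the saved plugin option:

```php
<?php
// Stand-ins for the post content and the option saved by the plugin.
$content   = 'I will love you forever, I promise.';
$blacklist = 'love, forever';

// 1. Break the post down into individual words (\b is zero-width,
//    so spaces and punctuation are kept as separate array entries).
$post_words = preg_split( '/\b/', $content );

// 2. Break the blacklist down into individual words and trim whitespace.
$blacklisted_words = array_map( 'trim', explode( ',', $blacklist ) );

// 3. and 4. Encode every case-insensitive match in ROT13.
foreach ( $post_words as $key => $value ) {
	foreach ( $blacklisted_words as $word ) {
		if ( strcasecmp( $value, $word ) === 0 ) {
			$post_words[ $key ] = str_rot13( $value );
		}
	}
}

echo implode( '', $post_words );
// I will ybir you sberire, I promise.
```

The actual plugin below does the same thing, except that it reads the blacklist from the database and wraps each encoded word in a &lt;del&gt; tag.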
It’s time to create the PHP function (rot13_filter_post_content()) that filters the contents of a post and then actually detects blacklisted words and encrypts them in ROT13.
Below is the code for the post’s filter.
/**
 * Encrypt every blacklisted word in ROT13
 *
 * @param string $content Post content to filter.
 * @return string
 */
function rot13_filter_post_content( $content ) {

	// Get the words marked as blacklisted by the plugin
	$blacklisted_words = esc_textarea( get_option( 'rot13_plugin_option' ) );

	// If no blacklisted words are defined, return the post's content unchanged.
	if ( empty( $blacklisted_words ) ) {
		return $content;
	}

	// Ensure we are dealing with "posts", not "pages" or any other content type.
	if ( is_singular( 'post' ) ) {

		// Break down the post's contents into individual words
		$post_words = preg_split( '/\b/', $content );

		// Break down the blacklist into individual words
		$blacklisted_words = explode( ',', $blacklisted_words );

		// Remove any leading or trailing white space
		$blacklisted_words = array_map(
			function ( $arg ) {
				return trim( $arg );
			},
			$blacklisted_words
		);

		// Iterate over the array of words in the post
		foreach ( $post_words as $key => $value ) {

			// Iterate over the array of blacklisted words
			foreach ( $blacklisted_words as $words ) {

				// Compare the words, being case-insensitive
				if ( strcasecmp( $post_words[ $key ], $words ) == 0 ) {

					// Encrypt any blacklisted word
					$post_words[ $key ] = '<del>' . str_rot13( $value ) . '</del>';
				}
			}
		}

		// Convert the individual words in the post back into a string
		$content = implode( '', $post_words );
	}

	return $content;
}
add_filter( 'the_content', 'rot13_filter_post_content' );
While the code above for the filter function is quite easy to understand, especially because it is so heavily commented, I’ll explain a bit more anyway.
The is_singular( 'post' ) conditional tag ensures that we are dealing with a post, and not a page or any other content type.
With preg_split(), we break down the post’s contents into individual words and save them to an array by splitting on the RegEx pattern \b, which matches word boundaries.
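Because \b is zero-width, preg_split() keeps the spaces and punctuation between words as their own array entries, which is what later lets us glue everything back together with implode() without losing anything:

```php
<?php
$text = 'Why did the chicken cross the road?';

// Split at word boundaries; the delimiters consume no characters.
$pieces = preg_split( '/\b/', $text );

// Joining the pieces restores the original string exactly.
var_dump( implode( '', $pieces ) === $text );  // bool(true)
```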
The list of blacklisted words is retrieved from the database using get_option(), with rot13_plugin_option as the option’s name.
From the screenshot of the plugin’s settings page above and the description of the textarea field, we can see that the blacklisted words are separated by commas, our delimiter. PHP’s explode() function breaks the blacklist down into an array by splitting on those commas.
A closure is applied to the $blacklisted_words array via array_map() to trim leading and trailing white space from the array values (the individual blacklisted words).
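In isolation, the explode-then-trim step looks like this; the sample option value is a stand-in for what get_option() would return:

```php
<?php
// A sample value as it might be saved from the settings textarea.
$option_value = 'love, forever ,hate';

// Split on commas, then trim stray white space from each entry via a closure.
$blacklisted_words = array_map(
	function ( $word ) {
		return trim( $word );
	},
	explode( ',', $option_value )
);

print_r( $blacklisted_words );
// The array now contains exactly: love, forever, hate
```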
The foreach constructs iterate over the post’s words and check whether any word is in the array of blacklisted words. Any blacklisted word that is detected is encrypted in ROT13 and enclosed in a <del> tag.
The $post_words array is then converted back into a string and returned.
Finally, the function is hooked to the the_content filter action.
Below is a screenshot of a post with the words “love” and “forever” blacklisted.
ROT13 is a simple encryption technique that can be easily decrypted. Thus, you should never use it for serious data encryption.
Even if you don’t end up using the plugin, the concepts you’ve learned in creating it can be applied in many situations. For example, obfuscating inappropriate words (such as profanity) in ROT13 would be a nice feature in a forum where people have the freedom to post anything.
Hopefully, you have learned a thing or two from this tutorial. If you have any questions or contributions, please let us know in the comments.