We always try our best to challenge your artistic abilities and produce some interesting, beautiful and creative artwork. As designers, we usually turn to different sources of inspiration. As a matter of fact, we’ve discovered the best one: desktop wallpapers that are a little more distinctive than the usual crowd.
This creativity mission has been going on for over five years now, and we’re very thankful to all the designers who have contributed and are still diligently contributing each month.
Today we are pleased to feature Smallicons, a set of 54 flat small icons. If you are looking for a way to make your design fresh and expressive, then this freebie is the answer. The set was created and designed by Nick Frost and Greg Lapin of Smallicons. [Links checked February 9, 2017]
The freebie includes 36 icons drawn from a full commercial set available on Smallicons, plus 18 icons designed exclusively for Smashing Magazine.
Guesstimates by analysts put the number of mobile app downloads this year at somewhere between 56 and 82 billion, and the average user downloads somewhere between 26 and 41 apps, only a smaller subset of which are used on a regular basis. Other numbers indicate that 95% of downloaded apps are abandoned within a month and that 26% of apps are used only once.
Depending on the user, these abandoned apps are deleted or ignored, never to be opened again.
Can you imagine your company having a chief electricity officer? Seems ridiculous, doesn’t it? But many large businesses did when electricity first started to power the industrial economy.
Electricity is such an integral part of our working life that it is impossible to imagine life without it. Companies just couldn’t operate without power, but it wasn’t always that way. Many business leaders failed to grasp the full potential of electricity after it was first introduced.
In a previous blog post, I talked about the dangers of over-segmentation. Although I said segmentation is helpful, I didn’t elaborate on how segmentation should be done. Some readers even thought I was disparaging segmentation, which wasn’t my intent.
My previous post was aimed at helping marketers who have become over-enamoured with the promise that technology alone will solve their marketing optimization challenges.
Segmentation should indeed be a part of your conversion optimization strategy.
Segmentation means putting structures in place to deliver appropriate messages to audiences with distinct needs and expectations.
The important bit in that definition is “distinct needs and expectations.” While there are tools that will target ever-smaller segments based on small data hints and guesses, that targeting can quickly become spurious if it isn’t properly tested.
There may be a difference in conversion rate between New York Times readers who research Mazda vehicles in Tennessee on sunny days (if any exist) and USA Today Volkswagen shoppers in Washington when it rains. But how do you know that to be true? Is it a fact or an assumption?
You should test that!
The 8 Steps to Test Segmentation (and anything else too!)
Let’s look at how to create and test the most effective website segmentation in 8 steps.
Web analytics and segmentation tools are awesomely helpful here: they can look at big data, help you identify patterns and give you fodder for hypothesis creation. If you can pull in third-party data to gain richer insights into your prospects’ demographics, psychographics and behaviour, you’ll have even more patterns to observe. That’s the starting point.
Once you’ve identified potential patterns in the data, you need to develop hypotheses about how those patterns could perform with your audience moving forward. Remember that past patterns can often be misleading or simply caused by data clumping. As any good stockbroker will tell you, “Past performance does not guarantee future performance.”
This is where a lot of mistakes are made in segmentation. If you assume patterns you’ve observed are stable without testing against a control group, you’re likely to make a major error.
A controlled test involves more than just implementing the assumed segments and seeing if they “work.” You need to A/B test your segmentation hypotheses against a control group where segments are not in place.
How to test website segmentation
Select a representative sample of your visitors
Create alternative segmentation hypotheses
Select test groups randomly from your visitor sample
Track conversions based on your most important goals
Compare performance of each segmentation hypothesis
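The testing procedure above can be sketched in code. This is a minimal illustration, not WiderFunnel’s actual tooling: all visitor counts and conversion rates are hypothetical, and the comparison uses a standard two-proportion z-test to decide whether the segmented variant outperforms the control.

```python
import random
from math import sqrt

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for conversion rates (control A vs. variant B).
    Returns the z-score; |z| > 1.96 corresponds to p < 0.05 (two-tailed)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Randomly split a representative visitor sample into a control group
# (no segmentation) and a segmented-messaging variant.
random.seed(42)
visitors = list(range(20000))
random.shuffle(visitors)
control, variant = visitors[:10000], visitors[10000:]

# Hypothetical conversion rates: 4.0% baseline, 4.8% with segmentation.
control_conv = sum(random.random() < 0.040 for _ in control)
variant_conv = sum(random.random() < 0.048 for _ in variant)

z = z_test_two_proportions(control_conv, len(control),
                           variant_conv, len(variant))
print(f"z = {z:.2f} (|z| > 1.96 means significant at p < 0.05)")
```

In practice you would run one such comparison per segmentation hypothesis against the same control, and track conversions on your most important business goal rather than a proxy metric.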
How did the alternative hypotheses perform? Make sure you analyze the results based on metrics that lead to real business revenue. We recently ran a test where a variation that showed a 60% lift in clicks to the final step in the purchase process showed no difference in revenue at all. Make sure you optimize for the right goals.
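The clicks-versus-revenue pitfall can be made concrete with a quick calculation. The numbers below are illustrative, not the actual test data: a variation lifts clicks by 60% while revenue per visitor stays essentially flat, so optimizing for clicks alone would declare the wrong winner.

```python
# Illustrative per-variant totals (hypothetical numbers).
results = {
    "control": {"visitors": 10000, "clicks": 500, "revenue": 25000.0},
    "variant": {"visitors": 10000, "clicks": 800, "revenue": 25100.0},
}

for name, r in results.items():
    ctr = r["clicks"] / r["visitors"]       # click-through to final step
    rpv = r["revenue"] / r["visitors"]      # the metric that actually matters
    print(f"{name}: click rate {ctr:.1%}, revenue/visitor ${rpv:.2f}")
```

Here the variant’s click rate is 1.6x the control’s, yet revenue per visitor moves by only a cent, which is why the business goal, not an intermediate click, should be the conversion you track.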
The advanced step in results analysis is to create inferences about the “Why” behind the test results. Why did a certain segment respond differently than another? In science, an inference is made when the cause behind a test result is unknowable or not practical to discover.
Thinking about a website example, if you were to find that segmenting landing pages by a visitor’s stated hobby interest improved conversions, you could make inferences about what those interests say about the audience segments. You can’t directly measure why that’s the case. To observe people’s interests directly, you’d need to survey every visitor in that test, or hook each of them up to an fMRI machine. Clearly, neither option would be practical, or even possible, in a statistically significant A/B test.
A recent WiderFunnel test for a client revealed interesting differences in segmentation conversion rates. We tested a single product landing page for a company with high brand awareness. The test strategy involved various landing page layouts as well as varying amounts of copywriting. The key difference between two of the pages was the form placement.
In one layout, the transaction form was above the fold in the right column. In an alternative variation, there was a button in that spot leading to the full form, which had been moved down the page below longer copywriting content.
Different segments respond better to different pages
It turned out that one specific paid search traffic source responded much better to the shorter-copy version, while the majority of visitors converted at a higher rate on the longer-copy version.
Now that we have a data point showing this unique response, we can infer a reason behind that conversion rate difference. Perhaps the paid search target segment needed less convincing. More content for them was just a distraction. The great news is that this segment occurs on many websites, so we have a vast field of opportunities to validate this inference.
To be clear, a single test doesn’t prove this to be true. We’re making an inference that still needs to be validated.
What I’m talking about here is looking for the why behind the what. Don’t just be satisfied with finding a conversion rate lift. Aiming to understand the reason behind the result can lead to greater learning.
Here’s another area where mistakes are made. Even once you’ve inferred the reason behind a result, you haven’t really learned a substantial principle until that inference is validated. Until you’ve found a robust pattern, you know nothing about the reason behind the result.
Thinking about this landing page segmentation example, we can now validate that learning against other landing pages in the same company.
But, we don’t have to stop there. Here’s where it gets exciting.
Looking at a series of similar tests in similar situations allows you to develop a theory about people and their responses. Theories are what lead you to robust scientific marketing learning and insight into how and why people act the way they do online.
A robust theory is one that will predict outcomes.
If you can identify a user behaviour scenario that matches your theory, you can predict the outcomes using your theory. Every successful outcome that your theory predicts strengthens its validity, giving you a powerful strategic tool in your conversion optimization arsenal.
That’s our goal at WiderFunnel: to build the world’s largest database of tested learning and robust marketing theories that will continue to deliver huge continuous website improvement for our clients. Over and over and over.
What do you think?
What’s your experience with web segmentation? What segments are most important in your audience?
The mobile application development landscape is filled with many ways to build a mobile app. Among the most popular are native iOS, native Android, PhoneGap and Appcelerator Titanium. This article marks the start of a series of four articles covering these technologies. The series will provide an overview of how to build a simple mobile application using each of these four approaches. Because few developers have had the opportunity to develop for mobile using a variety of tools, this series is intended to broaden your scope.
Nowadays, with any Web app you build, you have dozens of architectural decisions to make. And you want to make the right ones: You want to use technologies that allow for rapid development, constant iteration, maximal efficiency, speed, robustness and more. You want to be lean and you want to be agile. You want to use technologies that will help you succeed in the short and long term. And those technologies are not always easy to pick out.
If you had to name one thing that could have been better at the last conference or meetup you attended, what would it be? I bet you’d say that the content or the interaction could have been better in some way. I created Onslyde to solve this problem. It’s a free service and open-source project that (hopefully) will make public speaking easier and conferences better.
The motivation for the project came from my own speaking engagements in the tech industry.
Most designers spend too much time with their designs to be objective about them. The best thing any designer can do is to collect feedback from real users. Testing uncovers pain points and flaws in a design that are not otherwise obvious.
Recently, I had an opportunity to experience this firsthand when iterating on HelloSign, the iOS app that enables users to scan, sign and send documents from their phone using the built-in camera.