Tag Archives: test

20 Conversion Optimization Tips for Zooming Past Your Competition

Conversion rate optimization (CRO) is one of the most impactful things you can do as a marketer.

I mean, bringing traffic to a website is important (because without traffic you’re designing for an audience of crickets). But without a cursory understanding of conversion optimization—including research, data-driven hypotheses, A/B tests, and analytical capabilities—you risk making decisions for your website traffic using only gut feel.

CRO can give your marketing team ideas for what you can be doing better to convert visitors into leads or customers, and it can help you discover which experiences are truly optimal, using A/B tests.

However, as with many marketing disciplines, conversion optimization is constantly misunderstood. It’s definitely not about testing button colors, and it’s not about proving to your colleagues that you’re right.

I’ve learned a lot about how to do CRO properly over the years, and below I’ve compiled 20 conversion optimization tips to help you do it well, too.

Conversion Optimization Tip 1:
Learn how to run an A/B test properly

Running an A/B test (an online controlled experiment) is one of the core practices of conversion optimization.

Testing two or more variations of a given page to see which performs best can seem easy, given how simple testing software has become. However, it’s still a methodology that uses statistical inference to decide which variant is best served to your audience. And there are a lot of fine distinctions that can throw things off.

What is A/B Testing?

There are many nuances we could get into here—Bayesian vs. frequentist statistics, one-tailed vs. two-tailed tests, etc.—but to make things simple, here are a few testing rules that should help you breeze past most common testing mistakes:

  • Always determine a sample size in advance and wait until your experiment is over before looking at “statistical significance.” You can use one of several online sample size calculators to get yours figured out (see the sketch just after this list for the arithmetic behind them).
  • Run your experiment for a few full business cycles (usually weekly cycles). A normal experiment may run for three or four weeks before you call your result.
  • Choose an overall evaluation criterion (or north star metric) that you’ll use to determine the success of an experiment. We’ll get into this more in Tip 4.
  • Before running the experiment, clearly write your hypothesis (here’s a good article on writing a true hypothesis) and how you plan to follow up on the experiment, whether it wins or loses.
  • Make sure your data tracking is implemented correctly so you’ll be able to pull the right numbers after the experiment ends.
  • Avoid interaction effects if you’re running multiple concurrent experiments.
  • QA your test setup and watch the early numbers for any wonky technical mistakes.
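
If you’re curious what those calculators are doing under the hood, the standard two-proportion sample size estimate is simple enough to sketch in a few lines. Here’s a rough Kotlin version; the function name and the fixed 95% confidence / 80% power levels are illustrative assumptions, not taken from any particular tool:

import kotlin.math.ceil
import kotlin.math.pow

// Per-variant sample size for a two-proportion test, assuming a two-sided
// 95% confidence level and 80% power. A sanity-check sketch, not a
// replacement for a proper calculator.
fun sampleSizePerVariant(baselineRate: Double, relativeLift: Double): Int {
    val zAlpha = 1.96 // z-score for 95% confidence, two-sided
    val zBeta = 0.84  // z-score for 80% power
    val p1 = baselineRate
    val p2 = baselineRate * (1 + relativeLift)
    val variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((zAlpha + zBeta).pow(2) * variance / (p2 - p1).pow(2)).toInt()
}

fun main() {
    // e.g. a 2% baseline conversion rate and a hoped-for 20% relative lift
    println(sampleSizePerVariant(0.02, 0.20)) // ≈ 21,000 visitors per variant
}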

I like to put all of the above fine details in an experiment document with a unique ID so that it can be reviewed later—and so the process can be improved upon with time.

An example of experiment documentation
An example of experiment documentation using a unique ID.
Tip 1: Ensure you take the time to set up the parameters of your A/B test properly before you begin. Early mistakes and careless testing can compromise the results.

Conversion Optimization Tip 2:
Learn how to analyze an A/B test

The ability to analyze your test after it has run is obviously important as well (and can be pretty nuanced depending on how detailed you want to get).

For instance, do you call a test a winner if it’s above 95% statistical significance? Well, that’s a good place to begin, but there are a few other considerations as you develop your conversion optimization chops:

  • Does your experiment have a sample ratio mismatch?
    Basically, if your test was set up so that 50% of traffic goes to the control and 50% goes to the variant, your end results should reflect this ratio. If the ratio is pretty far off, you may have had a buggy experiment. (Here’s a good calculator to help you determine this.)
  • Bring your data outside of your testing tool.
    It’s nice to see your aggregate data trends in your tool’s dashboard, and their math is a good first look, but I personally like to have access to the raw data. This way you can analyze it in Excel and really trust it. You can also import your data to Google Analytics to view the effects on key segments.
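
As a rough illustration of the sample ratio check above (the linked calculator is the more rigorous option), an SRM check for an intended 50/50 split boils down to a one-degree-of-freedom chi-square test. A minimal Kotlin sketch with hypothetical numbers:

// Sample ratio mismatch (SRM) check for an intended 50/50 split, via a
// one-degree-of-freedom chi-square test. A hypothetical helper, not the
// calculator linked above. The 10.83 threshold corresponds to p < 0.001,
// strict enough that only clearly broken experiments get flagged.
fun hasSampleRatioMismatch(controlVisitors: Long, variantVisitors: Long): Boolean {
    val expected = (controlVisitors + variantVisitors) / 2.0
    val dControl = controlVisitors - expected
    val dVariant = variantVisitors - expected
    val chiSquare = dControl * dControl / expected + dVariant * dVariant / expected
    return chiSquare > 10.83
}

fun main() {
    println(hasSampleRatioMismatch(50_300, 49_700)) // false: normal random variation
    println(hasSampleRatioMismatch(52_000, 48_000)) // true: investigate the test setup
}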

Segment-level analysis can also open up the opportunity for further insights-driven experiments and personalization. Does one segment respond overwhelmingly positively to a test you’ve run? That might be a good opportunity to implement personalization.

Checking your overall success metric first (winner, loser, inconclusive) and then moving to a more granular analysis of segments and secondary effects is common practice among CRO practitioners.
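
Once you have the raw visitor and conversion counts exported, that overall winner/loser call usually comes down to something like a two-proportion z-test, which is easy to reproduce in a spreadsheet or a few lines of code. A simplified sketch, assuming a two-sided test at 95% confidence:

import kotlin.math.abs
import kotlin.math.sqrt

// Two-proportion z-test on raw visitor/conversion counts. A simplified
// sketch of the winner/loser call; it deliberately ignores nuances like
// whether the pre-registered sample size was actually reached.
fun isSignificantAt95(
    controlVisitors: Long, controlConversions: Long,
    variantVisitors: Long, variantConversions: Long
): Boolean {
    val p1 = controlConversions.toDouble() / controlVisitors
    val p2 = variantConversions.toDouble() / variantVisitors
    val pooled = (controlConversions + variantConversions).toDouble() /
        (controlVisitors + variantVisitors)
    val standardError = sqrt(pooled * (1 - pooled) * (1.0 / controlVisitors + 1.0 / variantVisitors))
    val z = abs(p2 - p1) / standardError
    return z > 1.96 // two-sided, 95% confidence
}

fun main() {
    // 2.0% vs. 2.6% conversion on 10,000 visitors per arm: a significant lift
    println(isSignificantAt95(10_000, 200, 10_000, 260))
}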

Here’s how Chris McCormick from PRWD explains the process:

Once we have a high level understanding of how the test has performed, we start to dig below the surface to understand if there are any patterns or trends occurring. Examples of this would be: the day of the week, different product sets, new vs returning users, desktop vs mobile etc.

Also, there are tons of great A/B test analysis tools out there, like this one from CXL:

AB Test Calculator
Tip 2: Analyze your data carefully by ensuring that your sample ratio is correct. Then export it to a spreadsheet where you can check your overall success metric before moving on to more granular indicators.

Conversion Optimization Tip 3:
Learn how to design your experiments

At the beginning, it’s important to consider the kind of experiment you want to run. There are a few options in terms of experimental design (at least, these are the most common ones online):

  1. A/B/n test
  2. Multivariate test
  3. Bandit test

A/B/n test

An A/B/n test is what you’re probably most used to.

It splits traffic equally among two or more variants, and you determine which variant won based on its effect size (assuming that other factors like sample size and test duration were sufficient).

ABCD Test Example
An A/B test with four variants: Image source

Multivariate test

In a multivariate test, on the other hand, you can test several variables on a page and hope to learn what the interaction effects are among elements.

In other words, if you were changing a headline, a feature image, and a CTA button, in a multivariate test you’d hope to learn which is the optimal combination of all of these elements and how they affect each other when grouped together.

A Multivariate Test

Generally speaking, it seems like experts run about ten A/B tests for every multivariate test. The strategy I go by is:

  • Use A/B testing to determine best layouts at a more macro-level.
  • Use MVT to polish the layouts to make sure all the elements interact with each other in the best possible way.

Bandit test

Bandits are a bit different. They are algorithms that seek to automatically update their traffic distribution based on indications of which result is best. Instead of waiting for four weeks to test something and then exposing the winner to 100% traffic, a bandit shifts its distribution in real time.

Experimental Design: Bandits

Bandits are great for campaigns where you’re looking to minimize regret, such as short-term holiday campaigns and headline tests. They’re also good for automation at scale and targeting, specifically when you have lots of traffic and targeting rules and it’s tough to manage them all manually.

Unfortunately, while they are simpler from an experimental design perspective, they are much harder for engineers to implement technically. This is probably why they’re less common in the general marketing space, but an interesting topic nonetheless. If you want to learn more about bandits, read this article I wrote on the topic a few years ago.
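
To make the traffic-shifting idea concrete, here’s a minimal epsilon-greedy bandit sketch in Kotlin. It’s an illustrative toy, not what commercial bandit tools actually ship (most use more sophisticated algorithms, such as Thompson sampling), but the core loop of explore-a-little, exploit-a-lot is the same:

import kotlin.random.Random

// Epsilon-greedy bandit: mostly show the variant converting best so far,
// but keep exploring 10% of the time so a slow starter still gets traffic.
class EpsilonGreedyBandit(private val variants: Int, private val epsilon: Double = 0.1) {
    private val shows = IntArray(variants)
    private val conversions = IntArray(variants)

    fun pickVariant(): Int {
        if (Random.nextDouble() < epsilon) return Random.nextInt(variants) // explore
        return (0 until variants).maxByOrNull { rate(it) } ?: 0            // exploit
    }

    fun recordShow(variant: Int) { shows[variant]++ }
    fun recordConversion(variant: Int) { conversions[variant]++ }

    private fun rate(variant: Int) =
        if (shows[variant] == 0) 1.0 // optimistic start so every variant gets tried
        else conversions[variant].toDouble() / shows[variant]
}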

Tip 3: Consider the kind of experiment you want to run. Depending on your needs, you might run an A/B/n test, a multivariate test, a bandit test, or some other form of experimental design.

Conversion Optimization Tip 4:
Choose your OEC

Returning to a point made earlier, it’s important to choose which north star metric you care about: this is your OEC (Overall Evaluation Criterion). If you don’t state this and agree upon it up front as stakeholders in an experiment, you’re welcoming the opportunity for ambiguous results and cherry-picked data.

Basically, we want to avoid the problem of HARKing: hypothesizing after results are known.

Twitter, for example, wrote on their engineering blog that they solve this by stating their overall evaluation criterion up front:

One way we guide experimenters away from cherry-picking is by requiring them to explicitly specify the metrics they expect to move during the set-up phase….An experimenter is free to explore all the other collected data and make new hypotheses, but the initial claim is set and can be easily examined.

The term OEC was popularized by Ronny Kohavi at Microsoft, and he’s written many papers that include the topic, but the sentiment is widely known by people who run lots of experiments. You need to choose which metric really matters, and which metric you’ll make decisions with.

Tip 4: In order to avoid ambiguous or compromised data, state your OEC (Overall Evaluation Criterion) before you begin and hold yourself to it. And never hypothesize after results are known.

Conversion Optimization Tip 5:
Some companies shouldn’t A/B test

You can still do optimization without A/B testing, but not every company can or should run A/B tests.

It’s a simple mathematical limitation:

Some businesses just don’t have the volume of traffic or discrete conversion events to make it worth running experiments.

Getting an adequate amount of traffic to a test ultimately helps ensure its validity: you need enough visitors to reach the required sample size before a test can be called. For a rough sense of scale, detecting a lift from a 2% to a 2.4% conversion rate takes roughly 21,000 visitors per variant at conventional significance and power levels.

In addition, even if you could possibly squeeze out a valid test here and there, the marginal gains may not justify the costs when you compare it to other marketing activities in which you could engage.

That said, if you’re in this boat, you can still optimize. You can still set up adequate analytics, run user tests on prototypes and new designs, watch session replays, and fix bugs.

Running experiments is a ton of fun, but not every business can or should run them (at least not until they bring some traffic and demand through the door first).

Tip 5: Determine whether your company can or even should run A/B tests. Consider both your volume of traffic and the resources you’ll need to allocate before investing the time.

Conversion Optimization Tip 6:
Landing pages help you accelerate and simplify testing

Using landing pages is correlated with greater conversions, largely because using them makes it easier to do a few things:

  • Measure discrete transitions through your funnel/customer journey.
  • Run controlled experiments (reducing confounding variables and wonky traffic mixes).
  • Test changes across templates to more easily reach a large enough sample size to get valid results.

To the first point, having a distinct landing page (i.e. something separate and easier to update than your website) gives you an easy tracking implementation, no matter what your user journey is.

For example, if you have a sidebar call to action that brings someone to a landing page, and then when they convert, they are brought to a “Thank You” page, it’s very easy to track each step of this and set up a funnel in Google Analytics to visualize the journey.

Google Analytics Funnel

Landing pages also help you scale your testing results while minimizing the resource cost of running the experiment. Ryan Farley, co-founder and head of growth at LawnStarter, puts it this way:

At LawnStarter, we have a variety of landing pages….SEO pages, Facebook landing pages, etc. We try to keep as many of the design elements such as the hero and explainer as similar as possible, so that way when we run a test, we can run it sitewide.

That is, if you find something that works on one landing page, you can apply it to several you have up and running.

Tip 6: Use landing pages to make it easier to test. Unbounce lets you build landing pages in hours—no coding required—and conduct unlimited A/B tests to maximize conversions.

Conversion Optimization Tip 7:
Build a growth model for your conversion funnel

Creating a growth model requires stepping back and asking, “How do we get customers?” From there, you can model out a funnel that best represents this journey.

Most of the time, marketers set up simple goal funnel visualization in Google Analytics to see this:

Google Analytics Funnel Visualization

This gives you a lot of leverage for future analysis and optimization.

For example, if one of the steps in your funnel is to land on a landing page, and your landing pages all have a similar format (e.g. offers.site.com), then you can see the aggregate conversion rate of that step in the funnel.

More importantly, you can run interesting analyses, such as path analysis and landing page comparison. Doing so, you can compare apples to apples with your landing pages and see which ones are underperforming:

Landing Page Comparison
The bar graph on the right allows you to quickly see how landing pages are performing compared to the site average.

I talk more about the process of finding underperforming landing pages in my piece on content optimization if you want to learn step-by-step how to do that.

Tip 7: Model out a funnel that represents the customer journey so that you can more easily target underperforming landing pages and run instructive analyses focused on growth.

Conversion Optimization Tip 8:
Pick low hanging fruit in the beginning

This is mostly advice from personal experience, so it’s anecdotal: when you first start working on a project or in an optimization role, pick off the low hanging fruit. By that, I mean over-index on the “ease” side of things and get some points on the board.

It may be more impactful to set up and run complex experiments that require many resources, but you’ll never muster the political influence necessary to set these up without first building confidence in your ability to get results, as well as in the CRO process in general.

To inspire trust and to be able to command more resources and confidence, look for the easiest possible implementations and fixes before moving on to the complicated or risky stuff.

And fix bugs and clearly broken things first! Persuasive copywriting is pretty useless if your site takes days to load or pages are broken on certain browsers.

Tip 8: Score some easy wins by targeting low hanging fruit before you move on to more complex optimization tasks. Early wins give you the clout to drive bigger experiments later on.

Conversion Optimization Tip 9:
Where possible, reduce friction

Most conversion optimization falls under two categories (this is simplified, but mostly true):

  • Increasing motivation
  • Decreasing friction

Friction occurs when visitors become distracted, when they can’t accomplish a task, or simply when a task is arduous to accomplish. Generally speaking, the more “nice to have” your product is, the more friction matters to the conversion. This is reflected in BJ Fogg’s behavior model:

BJ Fogg’s Behavior Model

In other words, if you need to get a driver’s license, you’ll put up with pure hell at the DMV to get it, but you’ll drop out of the funnel at the most innocent error message if you’re only trying to buy something silly on drunkmall.com.

A few things that cut down on friction:

  • Make your site faster.
  • Trim needless form fields.
  • Cut down the number of steps in your checkout or signup flow.

For an example of the last one, I like how Wordable designed their signup flow. You start out on the homepage:

Wordable

Click “Try It Free” and get a Google OAuth screen:

Wordable OAuth

Give permissions:

Wordable permissions

And voila! You’re in:

Wordable Dashboard

You can decrease friction by reducing feelings of uncertainty as well. Most of the time, this is done with copywriting or reassuring design elements.

An example is with HubSpot’s form builder. We emphasize that it’s “effortless” and that there is “no technical expertise required” to set it up:

Hubspot Form Builder

(And here’s a little reminder that HubSpot integrates beautifully with Unbounce, so you’ll be able to automatically populate your account with lead info collected on your Unbounce landing pages.)

Tip 9: Cut down on anything that makes it harder for users to convert. This includes making sure your site is fast and trimming any forms or steps that aren’t necessary for checkout or signup.

Conversion Optimization Tip 10:
Help increase motivation

The second side of the conversion equation, as I mentioned, is motivation.

An excellent way to increase the motivation of a visitor is simply to make the process of conversion…fun. Most tasks online don’t need to be arduous or frustrating; we’ve just made them that way through apathy and error.

Take, for example, your standard form or survey. Pretty boring, right?

Well, today, enough technological solutions exist to implement interactive or conversational forms and surveys.

One such solution is Survey Anyplace. I asked their founder and CEO, Stefan Debois, about how their product helps motivate people to convert, and here’s what he said:

An effective and original way to increase conversion is to use an interactive quiz on your website. Compared to a static form, people are more likely to engage in a quiz, because they get back something useful. An example is Eneco, a Dutch Utility company: in just 6 weeks, they converted more than 1000 website visitors with a single quiz.

Entire companies have been built on the premise that the typical form is boring and could be made more fun and pleasant to complete (e.g. TypeForm). Just think, “How can I compel more people to move through this process?”

Other ways to do this that are quite commonplace involve invoking certain psychological triggers to compel forward momentum:

  • Implement social proof on your landing pages.
  • Use urgency to compel users to act more quickly.
  • Build out testimonials with well-known users to showcase authority.

There are many more ways to use psychological triggers to motivate conversions. Check out Robert Cialdini’s classic book, Influence, to learn more. Also, check out The Wheel of Persuasion for inspiration on persuasive triggers.

Tip 10: Make your conversion process fun in order to compel your visitors to keep moving forward. Increased interactivity, social proof, urgency, and testimonials that showcase authority can all help you here too.

Conversion Optimization Tip 11:
Clarity > Persuasion

While persuasion and motivation are really important, often the best way to convert visitors is to ensure they understand what you’re selling.

Stated differently, clarity trumps persuasion.

Use a five-second test to find out how clear your messaging is.

Conversion Optimization Tip 12:
Consider the “Pre-Click” Experience

People forget the pre-click experience. What does a user do before they hit your landing pages? What ad did they click? What did they search in Google to get to your blog post?

Knowing this stuff can help you create strong message match between your pre-click experience and your landing page.

Sergiu Iacob, SEO Manager at Bannersnack, explains their process for factoring in keywords:

When it comes to organic traffic, we establish the user intent by analyzing all the keywords a specific landing page ranks for. After we determine what the end result should look like, we adjust both our landing page and our in app user journey. The same process is used in the optimization of landing pages for search campaigns.

I’ve recommended the same thing before when it comes to capturing email leads. If you can’t figure out why people aren’t converting, figure out what keywords are bringing them to your site.

Usually, this results in a sort of passive “voice of customer” mining, where you can message match the keywords you’re ranking for with the offer on that page.

It makes it much easier to predict what messages your visitors will respond to. And it is, in fact, one of the cheapest forms of user research you can conduct.

AHRefs Keywords
Using Ahrefs to determine what keywords brought traffic to a page.
Tip 12: Don’t forget the pre-click experience. What do your users do before they hit your landing page? Make sure you have a strong message match between your ads (or emails) and the pages they link to.

Conversion Optimization Tip 13:
Build a repeatable CRO process

Despite some popular blog posts, conversion optimization isn’t about a series of “conversion tactics” or “growth hacks.” It’s about a process and a mindset.

Here’s how Peep Laja, founder of CXL, put it:

The quickest way to figure out whether someone is an amateur or a pro is this: amateurs focus on tactics (make the button bigger, write a better headline, give out coupons etc) while pros have a process they follow.

And, ideally, the CRO process is a never-ending one:

CRO Process

Conversion Optimization Tip 14:
Invest in education for your team

CRO people have to know a lot about a lot:

  • Statistics
  • UX design
  • User research
  • Front end technology
  • Copywriting

No one comes out of the gate as a 10 out of 10 in all of those areas (and most never end up there, either). You, as an optimizer, need to be continuously learning and growing. If you’re a manager, you need to make sure your team is continuously learning and growing.

Conversion Optimization Tip 15:
Share insights

The fastest way to scale and leverage experimentation is to share your insights and learnings across the organization.

This becomes more and more valuable the larger your company grows. It also becomes harder and harder the more you grow.

Essentially, by sharing you can avoid reinventing the wheel, you can bring new teammates up to speed faster, and you can scale and spread winning insights to teams who then shorten their time to testing. Invest in some sort of insights management system, no matter how basic.

Entire products have been built around this, such as GrowthHackers’ North Star and Effective Experiments.

Effective Experiments
Tip 15: Share what you learn within your organization. The bigger your company grows, the more important information sharing becomes—but the more difficult it will become as well.

Conversion Optimization Tip 16:
Keep your cognitive biases in check

As the great Richard Feynman once said, “The first principle is that you must not fool yourself, and you are the easiest person to fool.”

We’re all afflicted by cognitive biases, ranging from confirmation bias to the availability heuristic. Some of these can really impact our testing programs, specifically confirmation bias (and its close cousin, the Texas Sharpshooter Fallacy) where you only seek out pieces of data that confirm your previous beliefs and throw out those that go against them.

Experimenter Bias

It may be worthwhile (and entertaining) simply to run down Wikipedia’s giant list of cognitive biases and gauge where you may currently be running blind or biased.

Tip 16: Be cognizant of your own cognitive biases. If you’re not careful, they can influence the outcome of your experiments and cause you to miss (or misinterpret) key insights in your data.

Conversion Optimization Tip 17:
Evangelize CRO to your greater org

Having a dedicated CRO team is great. Evangelizing the work you’re doing to the rest of the organization? Even better.

Evangelize your CRO
Spread the word about the importance of CRO within your org.

When an entire organization buys into the value of data-informed decision making and experimentation, magical things can happen. Ideas burst forth, and innovation becomes easy. Annoying roadblocks are deconstructed. HiPPO-driven decision making is deprioritized behind proper experiments.

Things you can do to evangelize CRO and experimentation:

  • Write down your learnings each week on a company wiki.
  • Send out a newsletter with live experiments and experiment results each week to interested parties.
  • Recruit an executive sponsor with lots of internal influence.
  • Sing your own praises when you get big wins. Sing them loud.
  • Make testing fun, and make it easier for others to join in and pitch ideas.
  • Make it easier for people outside of the CRO team to sponsor tests.
  • Say the word “hypothesis” a lot (who knows, it might work).

This is all a kind of art; there are no universal methods for spreading the good gospel of CRO. But it’s important that you know it’s probably going to be something of an uphill battle, depending on how big your company is and what the culture has traditionally been like.

Tip 17: Spread the gospel of CRO across your organization in order to ensure others buy into the value of data-driven decision making and experimentation.

Conversion Optimization Tip 18:
Be skeptical with CRO case studies

This isn’t so much a conversion optimization tip as it is life advice: be skeptical, especially when marketing is involved.

I say this as a marketer. Marketers exaggerate stuff. Some marketers omit important details that would derail a narrative. Sometimes, they don’t understand p-values, or how to set up a proper test (maybe they haven’t read Tip 1 in this article).

In short, especially in content marketing, marketers are incentivized to publish sensational case studies regardless of their statistical merit.

All of that results in a pretty grim standard for the current CRO case study.

Don’t get me wrong, some case studies are excellent, and you can learn a lot from them. Digital Marketer lays out a few rules for detecting quality case studies:

  • Did they publish total visitors?
  • Did they share the lift percentage correctly?
  • Did they share the raw conversions? (Does the lack of raw conversions hurt my case study?)
  • Did they identify the primary conversion metric?
  • Did they publish the confidence rate? Is it >90%?
  • Did they share the test procedure?
  • Did they only use data to justify the conclusion?
  • Did they share the test timeline and date?

Without context or knowledge of the underlying data, a case study might be a whole lot of nonsense. And if you want a good cathartic rant on bad case studies, then Andrew Anderson’s essay is a must-read.

According to a study...
Tip 18: Approach existing material on CRO with a skeptical mindset. Marketers are often incentivized to publish case studies with sensational results, regardless of the quality of the data that supports them.

Conversion Optimization Tip 19:
Calculate the cost of additional research vs. just running it

Matt Gershoff, CEO of Conductrics, is one of the smartest people I know regarding statistics, experimentation, machine learning, and general decision theory. He has stated some version of the following on a few occasions:

  • Marketing is about decision-making under uncertainty.
  • It’s about assessing how much uncertainty is reduced with additional data.
  • It must consider, “What is the value in that reduction of uncertainty?”
  • And it must consider, “Is that value greater than the cost of the data/time/opportunity costs?”

Yes, conversion research is good. No, you shouldn’t run blind and just test random things.

But at the end of the day, we need to calculate how much additional value a reduction in uncertainty via additional research gives us.

If you can run a cheap A/B test that takes almost no time to set up? And it doesn’t interfere with any other tests or present an opportunity cost? Ship it. Because why not?

But if you’re changing an element of your checkout funnel that could prove to be disastrous to your bottom line, well, you probably want to mitigate any possible downside. Bring out the heavy guns—user testing, prototyping, focus groups, whatever—because this is a case where you want to reduce as much uncertainty as possible.

Tip 19: Balance the value of doing more research with the costs (including opportunity costs) associated with it. Sometimes running a quick and dirty A/B test will be sufficient for your needs.

Conversion Optimization Tip 20:
CRO never ends

You can’t just run a few tests and call it quits.

The big wins from the early days of working on a relatively unoptimized site may taper off, but CRO never ends. Times change. Competitors and technologies come and go. Your traffic mix changes. Hopefully, your business changes as well.

As such, even the best test results are perishable, given enough time. So plan to stick it out for the long run and keep experimenting and growing.

Think Kaizen.

Kaizen

Conclusion

There you go, 20 conversion optimization tips. That’s not all there is to know; this is a never-ending journey, just like the process of growth and optimization itself. But these tips should get you started and moving in the right direction.

This article:

20 Conversion Optimization Tips for Zooming Past Your Competition


How To Improve Test Coverage For Your Android App Using Mockito And Espresso

Vivek Maskara

In app development, a variety of use cases and interactions come up as you iterate on the code. The app might need to fetch data from a server, interact with the device’s sensors, access local storage or render complex user interfaces.

The important thing to consider while writing tests is the units of responsibility that emerge as you design the new feature. The unit test should cover all possible interactions with the unit, including standard interactions and exceptional scenarios.

In this article, we will cover the fundamentals of testing and frameworks such as Mockito and Espresso, which developers can use to write unit tests. I will also briefly discuss how to write testable code, and explain how to get started with local and instrumented tests in Android.

Recommended reading: How To Set Up An Automated Testing System Using Android Phones (A Case Study)

Fundamentals Of Testing

A typical unit test contains three phases.

  1. First, the unit test initializes a small piece of an application it wants to test.
  2. Then, it applies some stimulus to the system under test, usually by calling a method on it.
  3. Finally, it observes the resulting behavior.

If the observed behavior is consistent with the expectations, the unit test passes; otherwise, it fails, indicating that there is a problem somewhere in the system under test. These three unit test phases are also known as arrange, act and assert, or simply AAA. The app should ideally include three categories of tests: small, medium and large.

  • Small tests comprise unit tests that mock every major component and run quickly in isolation.
  • Medium tests are integration tests that integrate several components and run on emulators or real devices.
  • Large tests are integration and UI tests that run by completing a UI workflow and ensure that the key end-user tasks work as expected.

Note: An instrumentation test is a type of integration test. These are tests that run on an Android device or emulator. These tests have access to instrumentation information, such as the context of the app under test. Use this approach to run unit tests that have Android dependencies that mock objects cannot easily satisfy.

Writing small tests allows you to address failures quickly, but a passing small test alone gives you little confidence that the app will work as a whole. It’s important to have tests from all categories in the app, although the proportion of each category can vary from app to app. A good unit test should be easy to write, readable, reliable and fast.
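
To make the arrange/act/assert structure concrete, here’s a minimal JUnit 4 test in Kotlin. The Calculator class is a hypothetical unit under test, defined inline so the example is self-contained:

import org.junit.Assert.assertEquals
import org.junit.Test

// A hypothetical unit under test, defined inline to keep the example small.
class Calculator {
    fun add(a: Int, b: Int) = a + b
}

class CalculatorTest {

    @Test
    fun addition_isCorrect() {
        val calculator = Calculator()  // Arrange: initialize the unit under test
        val sum = calculator.add(2, 3) // Act: apply a stimulus
        assertEquals(5, sum)           // Assert: observe the resulting behavior
    }
}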

Here’s a brief introduction to Mockito and Espresso, which make testing Android apps easier.

Mockito

There are various mocking frameworks, but the most popular of them all is Mockito:

Mockito is a mocking framework that tastes really good. It lets you write beautiful tests with a clean & simple API. Mockito doesn’t give you hangover because the tests are very readable and they produce clean verification errors.

Its fluent API separates pre-test preparation from post-test validation. Should a test fail, Mockito makes it clear where our expectations differ from reality! The library has everything you need to write complete tests.

Espresso

Espresso helps you write concise, beautiful and reliable Android UI tests.

The code snippet below shows an example of an Espresso test. We will take up the same example later in this tutorial when we talk in detail about instrumentation tests.

@Test
public void setUserName() {
    onView(withId(R.id.name_field)).perform(typeText("Vivek Maskara"));
    onView(withId(R.id.set_user_name)).perform(click());
    onView(withText("Hello Vivek Maskara!")).check(matches(isDisplayed()));
}

Espresso tests state expectations, interactions and assertions clearly, without the distraction of boilerplate content, custom infrastructure or messy implementation details getting in the way. Whenever your test invokes onView(), Espresso waits to perform the corresponding UI action or assertion until the synchronization conditions are met, meaning:

  • the message queue is empty,
  • no instances of AsyncTask are currently executing a task,
  • the idling resources are idle.

These checks ensure that the test results are reliable.

Writing Testable Code

Unit testing Android apps is difficult and sometimes impossible. A good design, and only a good design, can make unit testing easier. Here are some of the concepts that are important for writing testable code.

Avoid Mixing Object Graph Construction With Application Logic

In a test, you want to instantiate the class under test and apply some stimulus to the class and assert that the expected behavior was observed. Make sure that the class under test doesn’t instantiate other objects and that those objects do not instantiate more objects and so on. In order to have a testable code base, your application should have two kinds of classes:

  • The factories, which are full of the “new” operators and which are responsible for building the object graph of your application;
  • The application logic classes, which are devoid of the “new” operator and which are responsible for doing the work.

Constructors Should Not Do Any Work

The most common operation you will do in tests is the instantiation of object graphs. So, make it easy on yourself, and make the constructors do no work other than assigning all of the dependencies into the fields. Doing work in the constructor not only will affect the direct tests of the class, but will also affect related tests that try to instantiate your class indirectly.
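
Here’s a small hypothetical sketch of the difference (the class names are mine, not from the sample project):

// Hypothetical types, defined inline so the sketch is self-contained.
interface ApiClient {
    fun fetchUser(id: String): String
}

class RealApiClient : ApiClient {
    override fun fetchUser(id: String) = "user-$id" // stand-in for a real network call
}

// Hard to test: the class instantiates its own collaborator and does work in
// the constructor, so a test can neither substitute a mock nor construct it cheaply.
class HardToTestRepository {
    private val api: ApiClient = RealApiClient()   // "new" operator hidden inside
    private val warmCache = api.fetchUser("admin") // work in the constructor
    fun loadUser(id: String) = api.fetchUser(id)
}

// Easy to test: the constructor only assigns its dependency. A test can pass
// in a Mockito mock of ApiClient and instantiate this class for free.
class TestableRepository(private val api: ApiClient) {
    fun loadUser(id: String) = api.fetchUser(id)
}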

Avoid Static Methods Wherever Possible

The key to testing is the presence of seams: places where you can divert the normal execution flow. Seams are needed so that you can isolate the unit under test. If you build an application with nothing but static methods, you will have a procedural application. How much a static method hurts from a testing point of view depends on where it is in your application call graph. A leaf method such as Math.abs() is not a problem, because the execution call graph ends there. But if you pick a method in the core of your application logic, then everything behind that method becomes hard to test, because there is no way to insert test doubles.
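
As a hypothetical sketch of such a seam, compare a hard-wired static-style call with an injectable time source:

// A static-style time source: callers are pinned to the real clock.
object Clock {
    fun now(): Long = System.currentTimeMillis()
}

// Hard to test: the expiry check always reads the real system clock.
class HardToTestSession {
    fun isExpired(expiresAt: Long) = Clock.now() > expiresAt
}

// Easy to test: the time source is a seam. Production code uses the default;
// a test passes a fixed value, e.g. Session(now = { 1_000L }).
class Session(private val now: () -> Long = { System.currentTimeMillis() }) {
    fun isExpired(expiresAt: Long) = now() > expiresAt
}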

Avoid Mixing Of Concerns

A class should be responsible for dealing with just one entity. Inside a class, a method should be responsible for doing just one thing. For example, BusinessService should be responsible just for talking to a Business and not BusinessReceipts. Moreover, a method in BusinessService could be getBusinessProfile, but a method such as createAndGetBusinessProfile would not be ideal for testing. SOLID design principles must be followed for good design:

  • S: single-responsibility principle;
  • O: open-closed principle;
  • L: Liskov substitution principle;
  • I: interface segregation principle;
  • D: dependency inversion principle.

In the next few sections, we will be using examples from a really simple application that I built for this tutorial. The app has an EditText that takes a user name as input and displays the name in a TextView upon the click of a button. Feel free to take the complete source code for the project from GitHub. Here’s a screenshot of the app:


Testing example

Writing Local Unit Tests

Unit tests can be run locally on your development machine without a device or an emulator. This testing approach is efficient because it avoids the overhead of having to load the target app and unit test code onto a physical device or emulator every time your test is run. In addition to Mockito, you will also need to configure the testing dependencies for your project to use the standard APIs provided by the JUnit 4 framework.

Setting Up The Development Environment

Start by adding a dependency on JUnit4 in your project. The dependency is of the type testImplementation, which means that the dependencies are only required to compile the test source of the project.

testImplementation 'junit:junit:4.12'

We will also need the Mockito library to make interaction with Android dependencies easier.

testImplementation "org.mockito:mockito-core:$MOCKITO_VERSION"

Make sure to sync the project after adding the dependency. Android Studio should have created the folder structure for unit tests by default. If not, make sure the following directory structure exists:

<Project Dir>/app/src/test/java/com/maskaravivek/testingExamples

Creating Your First Unit Test

Suppose you want to test the displayUserName function in the UserService. For the sake of simplicity, the function simply formats the input and returns it. In a real-world application, it could make a network call to fetch the user profile and return the user’s name.

@Singleton
class UserService @Inject
constructor(private var context: Context) {

    fun displayUserName(name: String): String {
        val userNameFormat = context.getString(R.string.display_user_name)
        return String.format(Locale.ENGLISH, userNameFormat, name)
    }
}

We will start by creating a UserServiceTest class in our test directory. The UserService class uses Context, which needs to be mocked for the purpose of testing. Mockito provides a @Mock annotation for mocking objects, which can be used as follows:

@Mock internal var context: Context? = null

Similarly, you’ll need to mock all dependencies required to construct the instance of the UserService class. Before your test, you’ll need to initialize these mocks and inject them into the UserService class.

  • @InjectMocks creates an instance of the class and injects into it the mocks that are marked with the @Mock annotation.
  • MockitoAnnotations.initMocks(this); initializes those fields annotated with Mockito annotations.

Here’s how it can be done:

class UserServiceTest {

    @Mock internal var context: Context? = null
    @InjectMocks internal var userService: UserService? = null

    @Before
    fun setup() {
        MockitoAnnotations.initMocks(this)
    }
}

Now you are done setting up your test class. Let’s add a test to this class that verifies the functionality of the displayUserName function. Here’s what the test looks like:

@Test
fun displayUserName() {
    doReturn("Hello %s!").`when`(context)!!.getString(any(Int::class.java))
    val displayUserName = userService!!.displayUserName("Test")
    assertEquals(displayUserName, "Hello Test!")
}

The test uses a doReturn().when() statement to provide a response when a context.getString() is invoked. For any input integer, it will return the same result, "Hello %s!". We could have been more specific by making it return this response only for a particular string resource ID, but for the sake of simplicity, we are returning the same response to any input.

Finally, here’s what the test class looks like:

class UserServiceTest {

    @Mock internal var context: Context? = null
    @InjectMocks internal var userService: UserService? = null

    @Before
    fun setup() {
        MockitoAnnotations.initMocks(this)
    }

    @Test
    fun displayUserName() {
        doReturn("Hello %s!").`when`(context)!!.getString(any(Int::class.java))
        val displayUserName = userService!!.displayUserName("Test")
        assertEquals(displayUserName, "Hello Test!")
    }
}

Running Your Unit Tests

To run your unit tests, make sure Gradle is synchronized, and then click on the green play icon next to the test in the IDE.

When the unit tests are run, successfully or otherwise, you should see this in the “Run” menu at the bottom of the screen:

result after unit tests are run

You are done with your first unit test!

Writing Instrumentation Tests

Instrumentation tests are most suited for checking values of UI components when an activity is run. For instance, in the example above, we want to make sure that the TextView shows the correct user name after the Button is clicked. They run on physical devices and emulators and can take advantage of the Android framework APIs and supporting APIs, such as the Android Testing Support Library.

We’ll use Espresso to take actions on the main thread, such as button clicks and text changes.

Setting Up The Development Environment

Add a dependency on Espresso:

androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'

Instrumentation tests are created in an androidTest folder.

<Project Dir>/app/src/androidTest/java/com/maskaravivek/testingExamples

If you want to test a simple activity, create your test class in the same package as your activity.

Creating Your First Instrumentation Test

Let’s start by creating a simple activity that takes a name as input and, on the click of a button, displays the user name. The code for this activity is quite simple:

class MainActivity : AppCompatActivity() {

    var button: Button? = null
    var userNameField: EditText? = null
    var displayUserName: TextView? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        AndroidInjection.inject(this)
        setContentView(R.layout.activity_main)
        initViews()
    }

    private fun initViews() {
        button = this.findViewById(R.id.set_user_name)
        userNameField = this.findViewById(R.id.name_field)
        displayUserName = this.findViewById(R.id.display_user_name)

        this.button!!.setOnClickListener {
            displayUserName!!.text = "Hello ${userNameField!!.text}!"
        }
    }
}

To create a test for the MainActivity, we will start by creating a MainActivityTest class under the androidTest directory. Add the AndroidJUnit4 annotation to the class to indicate that the tests in this class will use the default Android test runner class.

@RunWith(AndroidJUnit4::class)
class MainActivityTest {}

Next, add an ActivityTestRule to the class. This rule provides functional testing of a single activity. For the duration of the test, you will be able to manipulate your activity directly using the reference obtained from getActivity().

@Rule @JvmField var activityActivityTestRule = ActivityTestRule(MainActivity::class.java)

Now that you are done setting up the test class, let’s add a test that verifies that the user name is displayed by clicking the “Set User Name” button.

@Test
fun setUserName() {
    onView(withId(R.id.name_field)).perform(typeText("Vivek Maskara"))
    onView(withId(R.id.set_user_name)).perform(click())
    onView(withText("Hello Vivek Maskara!")).check(matches(isDisplayed()))
}

The test above is quite simple to follow. It first simulates some text being typed in the EditText, performs the click action on the button, and then checks whether the correct text is displayed in the TextView.

The final test class looks like this:

@RunWith(AndroidJUnit4::class)
class MainActivityTest {

    @Rule @JvmField var activityActivityTestRule = ActivityTestRule(MainActivity::class.java)

    @Test
    fun setUserName() {
        onView(withId(R.id.name_field)).perform(typeText("Vivek Maskara"))
        onView(withId(R.id.set_user_name)).perform(click())
        onView(withText("Hello Vivek Maskara!")).check(matches(isDisplayed()))
    }
}

Running Your Instrumentation Tests

Just like for unit tests, click on the green play button in the IDE to run the test.

When you click the play button, the test version of the app is installed on the emulator or device, and the test runs automatically on it.

Instrumentation Testing Using Dagger, Mockito, And Espresso

Espresso is one of the most popular UI testing frameworks, with good documentation and community support. Mockito ensures that objects perform the actions that are expected of them. Mockito also works well with dependency-injection libraries such as Dagger. Mocking the dependencies allows us to test a scenario in isolation.

Until now, our MainActivity hasn’t used any dependency injection, and, as a result, we were able to write our UI test very easily. To make things a bit more interesting, let’s inject UserService in the MainActivity and use it to get the text to be displayed.

class MainActivity : AppCompatActivity() {

    var button: Button? = null
    var userNameField: EditText? = null
    var displayUserName: TextView? = null

    @Inject lateinit var userService: UserService

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        AndroidInjection.inject(this)
        setContentView(R.layout.activity_main)
        initViews()
    }

    private fun initViews() {
        button = this.findViewById(R.id.set_user_name)
        userNameField = this.findViewById(R.id.name_field)
        displayUserName = this.findViewById(R.id.display_user_name)

        this.button!!.setOnClickListener {
            displayUserName!!.text = userService.displayUserName(userNameField!!.text.toString())
        }
    }
}

With Dagger in the picture, we will have to set up a few things before we write instrumentation tests.

Imagine that the displayUserName function internally uses some API to fetch the details of the user. There should not be a situation in which a test does not pass due to a server fault. To avoid such a situation, we can use the dependency-injection framework Dagger and, for networking, Retrofit.

Setting Up Dagger In The Application

We will quickly set up the basic modules and components required for Dagger. If you are not familiar with Dagger, check out Google’s documentation on it. We will start by adding the dependencies for using Dagger in the build.gradle file.

implementation "com.google.dagger:dagger-android:$DAGGER_VERSION"
implementation "com.google.dagger:dagger-android-support:$DAGGER_VERSION"
implementation "com.google.dagger:dagger:$DAGGER_VERSION"
kapt "com.google.dagger:dagger-compiler:$DAGGER_VERSION"
kapt "com.google.dagger:dagger-android-processor:$DAGGER_VERSION"

Create a component in the Application class, and add the necessary modules that will be used in our project. We need to inject dependencies into the MainActivity of our app, so we will add a @Module for the activity.

@Module
abstract class ActivityBuilder {
    @ContributesAndroidInjector
    internal abstract fun bindMainActivity(): MainActivity
}

The AppModule class will provide the various dependencies required by the application. For our example, it will just provide an instance of Context and UserService.

@Module
open class AppModule(val application: Application) {

    @Provides
    @Singleton
    internal open fun provideContext(): Context {
        return application
    }

    @Provides
    @Singleton
    internal open fun provideUserService(context: Context): UserService {
        return UserService(context)
    }
}

The AppComponent class lets you build the object graph for the application.

@Singleton
@Component(modules = [(AndroidSupportInjectionModule::class), (AppModule::class), (ActivityBuilder::class)])
interface AppComponent {

    @Component.Builder
    interface Builder {
        fun appModule(appModule: AppModule): Builder
        fun build(): AppComponent
    }

    fun inject(application: ExamplesApplication)
}

Create a method that returns the already built component, and then inject this component into onCreate().

open class ExamplesApplication : Application(), HasActivityInjector {

    @Inject lateinit var dispatchingActivityInjector: DispatchingAndroidInjector<Activity>

    override fun onCreate() {
        super.onCreate()
        initAppComponent().inject(this)
    }

    open fun initAppComponent(): AppComponent {
        return DaggerAppComponent
            .builder()
            .appModule(AppModule(this))
            .build()
    }

    override fun activityInjector(): DispatchingAndroidInjector<Activity>? {
        return dispatchingActivityInjector
    }
}

Setting Up Dagger In The Test Application

In order to mock responses from the server, we need to create a new Application class that extends the class above.

class TestExamplesApplication : ExamplesApplication() {

    override fun initAppComponent(): AppComponent {
        return DaggerAppComponent.builder()
            .appModule(MockApplicationModule(this))
            .build()
    }

    @Module
    private inner class MockApplicationModule internal constructor(application: Application) : AppModule(application) {
        override fun provideUserService(context: Context): UserService {
            val mock = Mockito.mock(UserService::class.java)
            `when`(mock!!.displayUserName("Test")).thenReturn("Hello Test!")
            return mock
        }
    }
}

As you can see in the example above, we’ve used Mockito to mock UserService and stub its results. We still need a new runner that will point to the new application class with the overridden data.

class MockTestRunner : AndroidJUnitRunner() {

    override fun onCreate(arguments: Bundle) {
        StrictMode.setThreadPolicy(StrictMode.ThreadPolicy.Builder().permitAll().build())
        super.onCreate(arguments)
    }

    @Throws(InstantiationException::class, IllegalAccessException::class, ClassNotFoundException::class)
    override fun newApplication(cl: ClassLoader, className: String, context: Context): Application {
        return super.newApplication(cl, TestExamplesApplication::class.java.name, context)
    }
}

Next, you need to update the build.gradle file to use the MockTestRunner.

android {
    ...

    defaultConfig {
        ...
        testInstrumentationRunner ".MockTestRunner"
    }
}

Running The Test

All tests with the new TestExamplesApplication and MockTestRunner should be added to the androidTest package. This implementation makes the tests fully independent from the server and gives us the ability to manipulate responses.

With the setup above in place, our test class won’t change at all. When the test is run, the app will use TestExamplesApplication instead of ExamplesApplication, and, thus, a mocked instance of UserService will be used.

@RunWith(AndroidJUnit4::class)
class MainActivityTest {

    @Rule @JvmField var activityActivityTestRule = ActivityTestRule(MainActivity::class.java)

    @Test
    fun setUserName() {
        onView(withId(R.id.name_field)).perform(typeText("Test"))
        onView(withId(R.id.set_user_name)).perform(click())
        onView(withText("Hello Test!")).check(matches(isDisplayed()))
    }
}

The test will run successfully when you click on the green play button in the IDE.



That’s it! You have successfully set up Dagger and run tests using Espresso and Mockito.

Conclusion

We’ve highlighted that the most important aspect of improving code coverage is to write testable code. Frameworks such as Espresso and Mockito provide easy-to-use APIs that make writing tests for various scenarios easier. Tests should be run in isolation, and mocking the dependencies gives us an opportunity to ensure that objects perform the actions that are expected of them.

A variety of Android testing tools are available, and, as the ecosystem matures, the process of setting up a testable environment and writing tests will become easier.

Writing unit tests requires some discipline, concentration and extra effort. By creating and running unit tests against your code, you can easily verify that the logic of individual units is correct. Running unit tests after every build helps you to quickly catch and fix software regressions introduced by code changes to your app. Google’s testing blog discusses the advantages of unit testing.

The complete source code for the examples used in this article is available on GitHub. Feel free to take a look at it.



Link:  

How To Improve Test Coverage For Your Android App Using Mockito And Espresso


Tripwire Marketing: Lure in More Customers With 12 Slam-Dunk Tripwire Ideas


You’re unhappy with your conversion rate. People just aren’t buying what you’re selling. The solution might lie in tripwire marketing. The term tripwire marketing might sound a little shady, like you’re trying to get one over on your customer. That’s not the case at all. Marketing and advertising experts have been using tripwire marketing for decades in one form or another, and it works just as well online as it does in brick-and-mortar sales. In fact, it’s even more effective because you can more easily stay in touch with the customer. What is tripwire marketing? And how does it work?…

The post Tripwire Marketing: Lure in More Customers With 12 Slam-Dunk Tripwire Ideas appeared first on The Daily Egg.

Follow this link:  

Tripwire Marketing: Lure in More Customers With 12 Slam-Dunk Tripwire Ideas


How to do A/B Testing and Improve Your Conversions Quickly


If you’re not A/B testing your content, you’re leaving money on the table. The only way to truly evaluate your conversion funnel and marketing campaign is to get data directly from your customers. From e-commerce to SaaS, guesswork won’t cut it. You need enough hard data to understand how your audience responds to specific elements on your website. But what is A/B testing and how does it work? That’s what I’m covering today. There’s a lot of information here, so feel free to skip around if there’s a specific section that interests you: What Is A/B Testing and How Does…

The post How to do A/B Testing and Improve Your Conversions Quickly appeared first on The Daily Egg.

View original article: 

How to do A/B Testing and Improve Your Conversions Quickly


The 13 Most Effective Ways to Increase your Conversion Rate


The average conversion rate for a Facebook Ad is 9.1 percent. Website conversion rates, though, lag at an average of just 2.35 percent. The data varies by industry. On Facebook, for instance, fitness ads top the charts at nearly 15 percent. You might be surprised to learn that B2B ads convert at an impressive 10.63 percent. But what if you don’t want to spend hundreds or thousands of dollars on Facebook Ads? You’d rather increase conversion rate on your own website through organic marketing. That’s certainly possible. I’ve done it myself. But you need some information before you can start…

The post The 13 Most Effective Ways to Increase your Conversion Rate appeared first on The Daily Egg.

Link: 

The 13 Most Effective Ways to Increase your Conversion Rate

3 Assumptions That Can Kill Conversions

It’s dangerous territory to make assumptions, over-generalize, or depend on logic or even so called “best practices” to make decisions about site changes. My team and I launched an e-commerce website a few years ago, and here are four ways we tried to break through common conversion pitfalls in order to ensure we increased our own conversions: Assumption #1 – All Of Your Ideas Are Great Ideas You’ve had these experiences countless times… you had a great idea for the site that was informed and re-enforced by “best practices.” You sold it to the team by explaining how your idea…

The post 3 Assumptions That Can Kill Conversions appeared first on The Daily Egg.

See original: 

3 Assumptions That Can Kill Conversions

Hypothesis Testing


Hypothesis Testing: A systematic way to select samples from a group or population with the intent of making a determination about the expected behavior of the entire group. Part of the field of inferential statistics, hypothesis testing is also known as significance testing, since significance (or lack of same) is usually the bar that determines whether or not the hypothesis is accepted. A hypothesis is similar to a theory: if you believe something might be true but don’t yet have definitive proof, it is considered a theory until that proof is provided. Turning theories into accepted statements of fact is…

The post Hypothesis Testing appeared first on The Daily Egg.

Continue reading here: 

Hypothesis Testing

10 Simple Tips To Improve User Testing

(This is a sponsored post). Testing is a fundamental part of the UX designer’s job and a core part of the overall UX design process. Testing provides the inspiration, guidance and validation that product teams need in order to design great products. That’s why the most effective teams make testing a habit.
Usability testing involves observing users as they use a product. It helps you find where users struggle and what they like.

Excerpt from: 

10 Simple Tips To Improve User Testing

The Crazy Egg Guide to Landing Page Optimization

When it comes to increasing conversion rates, few strategies are more effective than the implementation of landing pages. Yet, these crucial linchpins to the optimization process are often rushed or overlooked completely in the grand scheme of marketing. Here at Crazy Egg, we believe it’s past time to give these hard-working pages a little more attention, which is why we’ve created this complete guide to landing page optimization. Even if you consider yourself a landing page pro, you’ll want to read this guide to make sure your pages are on track and converting as well as they should be. Why…

The post The Crazy Egg Guide to Landing Page Optimization appeared first on The Daily Egg.

Link to original: 

The Crazy Egg Guide to Landing Page Optimization


[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month


What would you do with 21% more sales this month?

I bet you’d walk into your next meeting with your boss with an extra spring in your step, right?

Well, when you implement a strategic marketing optimization program, results like this are not only possible, they are probable.

In this new case study, you’ll discover how e-commerce software supplier, Ecwid, ran one experiment for four weeks, and saw a 21% increase in paid upgrades.

Get the full Ecwid case study now!

Download a PDF version of the Ecwid case study, featuring experiment details, supplementary takeaways and insights, and a testimonial from Ecwid’s Sr. Director, Digital Marketing.




A little bit about Ecwid

Ecwid provides easy-to-use online store setup, management, and payment solutions. The company was founded in 2009, with the goal of enabling business-owners to add online stores to their existing websites, quickly and without hassle.

The company has a freemium business model: Users can sign up for free, and unlock more features as they upgrade to paid packages.

Ecwid’s partnership with WiderFunnel

In November 2016, Ecwid partnered with WiderFunnel with two primary goals:

  1. To increase initial signups for their free plan through marketing optimization, and
  2. To increase the rate of paid upgrades through platform optimization

This case study focuses on a particular experiment cycle that ran on Ecwid’s step-by-step onboarding wizard.

The methodology

Last winter, the WiderFunnel Strategy team did an initial LIFT Analysis of the onboarding wizard and identified several potential barriers to conversion. (Both in terms of completing steps to set up a new store, and in terms of upgrading to a paid plan.)

The lead WiderFunnel Strategist for Ecwid, Dennis Pavlina, decided to create an A/B cluster test to 1) address the major barriers simultaneously, and 2) get a major lift for Ecwid, quickly.

The overarching goal was to make the onboarding process smoother. The WiderFunnel and Ecwid optimization teams hoped that enhancing the initial user experience, and exposing users to the wide range of Ecwid’s features, would result in more users upgrading to paid plans.


Ecwid’s two objectives ended up coming together in this test. We thought that if more new users interacted with the wizard and were shown the whole ‘Ecwid world’ with all the integrations and potential it has, they would be more open to upgrading. People needed to be able to see its potential before they would want to pay for it.

Dennis Pavlina, Optimization Strategist, WiderFunnel

The Results

This experiment ran for four weeks, at which point the variation was determined to be the winner with 98% confidence. The variation resulted in a 21.3% increase in successful paid account upgrades for Ecwid.

Read the full case study for:

  • The details on the initial barriers to conversion
  • How this test was structured
  • Which secondary metrics we tracked, and
  • The supplementary takeaways and customer insights that came from this test

The post [Case Study] Ecwid sees 21% lift in paid plan upgrades in one month appeared first on WiderFunnel Conversion Optimization.

See original article:

[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month