How To Improve Test Coverage For Your Android App Using Mockito And Espresso

Vivek Maskara

In app development, a variety of use cases and interactions come up as one iterates the code. The app might need to fetch data from a server, interact with the device’s sensors, access local storage or render complex user interfaces.

The important thing to consider while writing tests is the units of responsibility that emerge as you design the new feature. The unit test should cover all possible interactions with the unit, including standard interactions and exceptional scenarios.

In this article, we will cover the fundamentals of testing and frameworks such as Mockito and Espresso, which developers can use to write unit tests. I will also briefly discuss how to write testable code and how to get started with local and instrumented tests in Android.

Recommended reading: How To Set Up An Automated Testing System Using Android Phones (A Case Study)

Fundamentals Of Testing

A typical unit test contains three phases.

  1. First, the unit test initializes a small piece of an application it wants to test.
  2. Then, it applies some stimulus to the system under test, usually by calling a method on it.
  3. Finally, it observes the resulting behavior.

If the observed behavior is consistent with the expectations, the unit test passes; otherwise, it fails, indicating that there is a problem somewhere in the system under test. These three unit test phases are also known as arrange, act and assert, or simply AAA. The app should ideally include three categories of tests: small, medium and large.

  • Small tests comprise unit tests that mock every major component and run quickly in isolation.
  • Medium tests are integration tests that integrate several components and run on emulators or real devices.
  • Large tests are integration and UI tests that run by completing a UI workflow and ensure that the key end-user tasks work as expected.

Note: An instrumentation test is a type of integration test. These are tests that run on an Android device or emulator. These tests have access to instrumentation information, such as the context of the app under test. Use this approach to run unit tests that have Android dependencies that mock objects cannot easily satisfy.

Writing small tests allows you to address failures quickly, but it’s difficult to gain confidence that a passing test will allow your app to work. It’s important to have tests from all categories in the app, although the proportion of each category can vary from app to app. A good unit test should be easy to write, readable, reliable and fast.
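The arrange, act and assert phases map directly onto the body of a test method. Here is a minimal JUnit 4 example, using a trivial `Calculator` class invented purely for illustration:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// A trivial class under test, invented only to illustrate the AAA structure.
class Calculator {
    fun add(a: Int, b: Int): Int = a + b
}

class CalculatorTest {

    @Test
    fun addReturnsSumOfOperands() {
        // Arrange: initialize a small piece of the application.
        val calculator = Calculator()

        // Act: apply a stimulus by calling a method on it.
        val result = calculator.add(2, 3)

        // Assert: observe the resulting behavior.
        assertEquals(5, result)
    }
}
```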

Here’s a brief introduction to Mockito and Espresso, which make testing Android apps easier.

Mockito

There are various mocking frameworks, but the most popular of them all is Mockito:

Mockito is a mocking framework that tastes really good. It lets you write beautiful tests with a clean & simple API. Mockito doesn’t give you hangover because the tests are very readable and they produce clean verification errors.

Its fluent API separates pre-test preparation from post-test validation. Should the test fail, Mockito makes it easy to see where our expectations differ from reality. The library has everything you need to write complete tests.
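For instance, stubbing and verification with Mockito look like this. The `UserRepository` interface below is hypothetical, used only to illustrate the API:

```kotlin
import org.mockito.Mockito.`when`
import org.mockito.Mockito.mock
import org.mockito.Mockito.verify

// A hypothetical collaborator, used only to illustrate stubbing and verification.
interface UserRepository {
    fun findName(id: Int): String
}

fun demo() {
    // Pre-test preparation: create the mock and stub its behavior.
    val repository = mock(UserRepository::class.java)
    `when`(repository.findName(42)).thenReturn("Vivek")

    // Exercise the mock as the class under test would.
    val name = repository.findName(42)
    check(name == "Vivek")

    // Post-test validation: verify that the interaction took place as expected.
    verify(repository).findName(42)
}
```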

Espresso

Espresso helps you write concise, beautiful and reliable Android UI tests.

The code snippet below shows an example of an Espresso test. We will take up the same example later in this tutorial when we talk in detail about instrumentation tests.

@Test
public void setUserName() {
    onView(withId(R.id.name_field)).perform(typeText("Vivek Maskara"));
    onView(withId(R.id.set_user_name)).perform(click());
    onView(withText("Hello Vivek Maskara!")).check(matches(isDisplayed()));
}
Espresso tests state expectations, interactions and assertions clearly, without the distraction of boilerplate content, custom infrastructure or messy implementation details getting in the way. Whenever your test invokes onView(), Espresso waits to perform the corresponding UI action or assertion until the synchronization conditions are met, meaning:

  • the message queue is empty,
  • no instances of AsyncTask are currently executing a task,
  • the idling resources are idle.

These checks ensure that the test results are reliable.
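If your app does background work that Espresso cannot see (for example, a plain thread pool), you can expose it through an idling resource. Below is a sketch using Espresso's `CountingIdlingResource`; the object name and tag are illustrative:

```kotlin
import android.support.test.espresso.idling.CountingIdlingResource

// Sketch: a global counter that tells Espresso when background work is in flight.
object EspressoIdlingResource {
    val countingIdlingResource = CountingIdlingResource("GLOBAL")

    fun increment() = countingIdlingResource.increment()   // call when work starts
    fun decrement() = countingIdlingResource.decrement()   // call when work finishes
}
```

A test would register it with `IdlingRegistry.getInstance().register(EspressoIdlingResource.countingIdlingResource)` before interacting with views, so that `onView()` waits until the counter returns to zero.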

Writing Testable Code

Unit testing Android apps is difficult and sometimes impossible. A good design, and only a good design, can make unit testing easier. Here are some of the concepts that are important for writing testable code.

Avoid Mixing Object Graph Construction With Application Logic

In a test, you want to instantiate the class under test and apply some stimulus to the class and assert that the expected behavior was observed. Make sure that the class under test doesn’t instantiate other objects and that those objects do not instantiate more objects and so on. In order to have a testable code base, your application should have two kinds of classes:

  • The factories, which are full of the “new” operators and which are responsible for building the object graph of your application;
  • The application logic classes, which are devoid of the “new” operator and which are responsible for doing the work.
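A minimal sketch of that separation, with illustrative names:

```kotlin
// Application logic: devoid of "new" operators; dependencies arrive via the constructor.
interface InvoiceRepository {
    fun invoicesFor(customerId: Int): List<Int>
}

class InvoiceService(private val repository: InvoiceRepository) {
    fun total(customerId: Int): Int = repository.invoicesFor(customerId).sum()
}

// Factory: full of "new" operators; responsible for building the object graph.
class DatabaseInvoiceRepository : InvoiceRepository {
    override fun invoicesFor(customerId: Int): List<Int> = TODO("query the database")
}

object AppFactory {
    fun invoiceService(): InvoiceService = InvoiceService(DatabaseInvoiceRepository())
}
```

A test can now construct `InvoiceService` with a fake repository and never touch the database.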

Constructors Should Not Do Any Work

The most common operation you will do in tests is the instantiation of object graphs. So, make it easy on yourself, and make the constructors do no work other than assigning all of the dependencies into the fields. Doing work in the constructor not only will affect the direct tests of the class, but will also affect related tests that try to instantiate your class indirectly.
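For example (the names below are illustrative):

```kotlin
interface ReportRepository { fun load(id: Int): String }
interface ReportFormatter { fun format(raw: String): String }

// Testable: the constructor does no work beyond assigning dependencies to fields,
// so a test can instantiate the class cheaply with fakes or mocks.
class ReportGenerator(
    private val repository: ReportRepository,
    private val formatter: ReportFormatter
) {
    fun generate(id: Int): String = formatter.format(repository.load(id))
}

// Anti-pattern: a constructor that builds its own dependencies and does work.
// Every test that instantiates it (directly or indirectly) pays for that work.
// class BadReportGenerator {
//     private val repository = DatabaseReportRepository() // hidden "new"
//     private val cache = repository.loadAll()            // work in the constructor
// }
```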

Avoid Static Methods Wherever Possible

The key to testing is the presence of seams, places where you can divert the normal execution flow so that you can isolate the unit under test. If you build an application with nothing but static methods, you will have a procedural application. How much a static method hurts from a testing point of view depends on where it sits in your application's call graph. A leaf method such as Math.abs() is not a problem because the execution call graph ends there. But if you pick a method at the core of your application logic, then everything behind that method becomes hard to test, because there is no way to insert test doubles.
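A common remedy is to hide the static call behind an interface, which creates the seam needed to insert a test double (the names below are illustrative):

```kotlin
// Hard to test: a call site welded to a static method, e.g.
//   fun grossPrice(amount: Double) = amount + TaxCalculator.taxFor(amount)

// Seam: an interface lets a test substitute a double for the static call.
interface TaxPolicy {
    fun taxFor(amount: Double): Double
}

class DefaultTaxPolicy : TaxPolicy {
    override fun taxFor(amount: Double): Double = amount * 0.2
}

class PriceService(private val taxPolicy: TaxPolicy) {
    fun grossPrice(amount: Double): Double = amount + taxPolicy.taxFor(amount)
}
```

In a test, `PriceService` can be given a fake `TaxPolicy`, so its logic is exercised without the real calculation behind it.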

Avoid Mixing Of Concerns

A class should be responsible for dealing with just one entity. Inside a class, a method should be responsible for doing just one thing. For example, BusinessService should be responsible just for talking to a Business and not BusinessReceipts. Moreover, a method in BusinessService could be getBusinessProfile, but a method such as createAndGetBusinessProfile would not be ideal for testing. SOLID design principles must be followed for good design:

  • S: single-responsibility principle;
  • O: open-closed principle;
  • L: Liskov substitution principle;
  • I: interface segregation principle;
  • D: dependency inversion principle.

In the next few sections, we will be using examples from a really simple application that I built for this tutorial. The app has an EditText that takes a user name as input and displays the name in a TextView upon the click of a button. Feel free to take the complete source code for the project from GitHub. Here’s a screenshot of the app:


Testing example



Writing Local Unit Tests

Unit tests can be run locally on your development machine without a device or an emulator. This testing approach is efficient because it avoids the overhead of having to load the target app and unit test code onto a physical device or emulator every time your test is run. In addition to Mockito, you will also need to configure the testing dependencies for your project to use the standard APIs provided by the JUnit 4 framework.

Setting Up The Development Environment

Start by adding a dependency on JUnit 4 in your project. The dependency is of the type testImplementation, which means that it is only required to compile the test sources of the project.

testImplementation 'junit:junit:4.12'

We will also need the Mockito library to make interaction with Android dependencies easier.

testImplementation "org.mockito:mockito-core:$MOCKITO_VERSION"

Make sure to sync the project after adding the dependency. Android Studio should have created the folder structure for unit tests by default. If not, make sure the following directory structure exists:

<Project Dir>/app/src/test/java/com/maskaravivek/testingExamples

Creating Your First Unit Test

Suppose you want to test the displayUserName function in the UserService. For the sake of simplicity, the function simply formats the input and returns it back. In a real-world application, it could make a network call to fetch the user profile and return the user’s name.

@Singleton
class UserService @Inject
constructor(private var context: Context) {

    fun displayUserName(name: String): String {
        val userNameFormat = context.getString(R.string.display_user_name)
        return String.format(Locale.ENGLISH, userNameFormat, name)
    }
}

We will start by creating a UserServiceTest class in our test directory. The UserService class uses Context, which needs to be mocked for the purpose of testing. Mockito provides the @Mock annotation for mocking objects, which can be used as follows:

@Mock internal var context: Context? = null

Similarly, you’ll need to mock all dependencies required to construct the instance of the UserService class. Before your test, you’ll need to initialize these mocks and inject them into the UserService class.

  • @InjectMocks creates an instance of the class and injects the mocks that are marked with the @Mock annotation into it.
  • MockitoAnnotations.initMocks(this) initializes fields annotated with Mockito annotations.

Here’s how it can be done:

class UserServiceTest {

    @Mock internal var context: Context? = null
    @InjectMocks internal var userService: UserService? = null

    @Before
    fun setup() {
        MockitoAnnotations.initMocks(this)
    }
}

Now you are done setting up your test class. Let’s add a test to this class that verifies the functionality of the displayUserName function. Here’s what the test looks like:

@Test
fun displayUserName() {
    doReturn("Hello %s!").`when`(context)!!.getString(any(Int::class.java))
    val displayUserName = userService!!.displayUserName("Test")
    assertEquals(displayUserName, "Hello Test!")
}
The test uses a doReturn().when() statement to provide a response when context.getString() is invoked. For any integer input, it will return the same result, "Hello %s!". We could have been more specific by making it return this response only for a particular string resource ID, but for the sake of simplicity, we return the same response for any input.
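If you do want the stub to respond only for the particular resource ID, you can match on it explicitly with Mockito's `eq()` matcher. A sketch, based on the same `R.string.display_user_name` resource used by `displayUserName()`:

```kotlin
// Stub context.getString() only for the specific resource ID,
// instead of matching any integer.
doReturn("Hello %s!").`when`(context)!!.getString(eq(R.string.display_user_name))
```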
Finally, here’s what the test class looks like:

class UserServiceTest {

    @Mock internal var context: Context? = null
    @InjectMocks internal var userService: UserService? = null

    @Before
    fun setup() {
        MockitoAnnotations.initMocks(this)
    }

    @Test
    fun displayUserName() {
        doReturn("Hello %s!").`when`(context)!!.getString(any(Int::class.java))
        val displayUserName = userService!!.displayUserName("Test")
        assertEquals(displayUserName, "Hello Test!")
    }
}

Running Your Unit Tests

In order to run the unit tests, make sure that Gradle is synchronized, and then click on the green play icon in the IDE.





When the unit tests are run, successfully or otherwise, you will see the results in the “Run” menu at the bottom of the screen.

You are done with your first unit test!

Writing Instrumentation Tests

Instrumentation tests are best suited to checking the values of UI components when an activity is run. For instance, in the example above, we want to make sure that the TextView shows the correct user name after the Button is clicked. Instrumentation tests run on physical devices and emulators and can take advantage of the Android framework APIs and supporting APIs, such as the Android Testing Support Library.
We’ll use Espresso to take actions on the main thread, such as button clicks and text changes.

Setting Up The Development Environment

Add a dependency on Espresso:

androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'

Instrumentation tests are created in an androidTest folder.

<Project Dir>/app/src/androidTest/java/com/maskaravivek/testingExamples

If you want to test a simple activity, create your test class in the same package as your activity.

Creating Your First Instrumentation Test

Let’s start by creating a simple activity that takes a name as input and, on the click of a button, displays the user name. The code for this activity is quite simple:

class MainActivity : AppCompatActivity() {

    var button: Button? = null
    var userNameField: EditText? = null
    var displayUserName: TextView? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        AndroidInjection.inject(this)
        setContentView(R.layout.activity_main)
        initViews()
    }

    private fun initViews() {
        button = this.findViewById(R.id.set_user_name)
        userNameField = this.findViewById(R.id.name_field)
        displayUserName = this.findViewById(R.id.display_user_name)

        this.button!!.setOnClickListener {
            displayUserName!!.text = "Hello ${userNameField!!.text}!"
        }
    }
}

To create a test for the MainActivity, we will start by creating a MainActivityTest class under the androidTest directory. Add the AndroidJUnit4 annotation to the class to indicate that the tests in this class will use the default Android test runner class.

@RunWith(AndroidJUnit4::class)
class MainActivityTest {}

Next, add an ActivityTestRule to the class. This rule provides functional testing of a single activity. For the duration of the test, you will be able to manipulate your activity directly using the reference obtained from getActivity().

@Rule @JvmField var activityActivityTestRule = ActivityTestRule(MainActivity::class.java)

Now that you are done setting up the test class, let’s add a test that verifies that the user name is displayed by clicking the “Set User Name” button.

@Test
fun setUserName() {
    onView(withId(R.id.name_field)).perform(typeText("Vivek Maskara"))
    onView(withId(R.id.set_user_name)).perform(click())
    onView(withText("Hello Vivek Maskara!")).check(matches(isDisplayed()))
}
The test above is quite simple to follow. It first simulates some text being typed in the EditText, performs the click action on the button, and then checks whether the correct text is displayed in the TextView.

The final test class looks like this:

@RunWith(AndroidJUnit4::class)
class MainActivityTest {

    @Rule @JvmField var activityActivityTestRule = ActivityTestRule(MainActivity::class.java)

    @Test
    fun setUserName() {
        onView(withId(R.id.name_field)).perform(typeText("Vivek Maskara"))
        onView(withId(R.id.set_user_name)).perform(click())
        onView(withText("Hello Vivek Maskara!")).check(matches(isDisplayed()))
    }
}

Running Your Instrumentation Tests

Just like for unit tests, click on the green play button in the IDE to run the test.



When you click the play button, the test version of the app will be installed on the emulator or device, and the test will run automatically on it.


Instrumentation Testing Using Dagger, Mockito, And Espresso

Espresso is one of the most popular UI testing frameworks, with good documentation and community support. Mockito ensures that objects perform the actions that are expected of them. Mockito also works well with dependency-injection libraries such as Dagger. Mocking the dependencies allows us to test a scenario in isolation.
Until now, our MainActivity hasn’t used any dependency injection, and, as a result, we were able to write our UI test very easily. To make things a bit more interesting, let’s inject UserService in the MainActivity and use it to get the text to be displayed.

class MainActivity : AppCompatActivity() {

    var button: Button? = null
    var userNameField: EditText? = null
    var displayUserName: TextView? = null

    @Inject lateinit var userService: UserService

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        AndroidInjection.inject(this)
        setContentView(R.layout.activity_main)
        initViews()
    }

    private fun initViews() {
        button = this.findViewById(R.id.set_user_name)
        userNameField = this.findViewById(R.id.name_field)
        displayUserName = this.findViewById(R.id.display_user_name)

        this.button!!.setOnClickListener {
            displayUserName!!.text = userService.displayUserName(userNameField!!.text.toString())
        }
    }
}

With Dagger in the picture, we will have to set up a few things before we write instrumentation tests.
Imagine that the displayUserName function internally uses some API to fetch the details of the user. There should not be a situation in which a test does not pass due to a server fault. To avoid such a situation, we can use the dependency-injection framework Dagger and, for networking, Retrofit.

Setting Up Dagger In The Application

We will quickly set up the basic modules and components required for Dagger. If you are not familiar with Dagger, check out Google’s documentation on it. We will start by adding dependencies for using Dagger in the build.gradle file.

implementation "com.google.dagger:dagger-android:$DAGGER_VERSION"
implementation "com.google.dagger:dagger-android-support:$DAGGER_VERSION"
implementation "com.google.dagger:dagger:$DAGGER_VERSION"
kapt "com.google.dagger:dagger-compiler:$DAGGER_VERSION"
kapt "com.google.dagger:dagger-android-processor:$DAGGER_VERSION"

Create a component in the Application class, and add the necessary modules that will be used in our project. We need to inject dependencies into the MainActivity of our app, so we will add a @Module for the activity.

@Module
abstract class ActivityBuilder {
    @ContributesAndroidInjector
    internal abstract fun bindMainActivity(): MainActivity
}
The AppModule class will provide the various dependencies required by the application. For our example, it will just provide an instance of Context and UserService.

@Module
open class AppModule(val application: Application) {

    @Provides
    @Singleton
    internal open fun provideContext(): Context {
        return application
    }

    @Provides
    @Singleton
    internal open fun provideUserService(context: Context): UserService {
        return UserService(context)
    }
}

The AppComponent class lets you build the object graph for the application.

@Singleton
@Component(modules = [(AndroidSupportInjectionModule::class), (AppModule::class), (ActivityBuilder::class)])
interface AppComponent {

    @Component.Builder
    interface Builder {
        fun appModule(appModule: AppModule): Builder
        fun build(): AppComponent
    }

    fun inject(application: ExamplesApplication)
}

Create a method that returns the already built component, and then inject this component into onCreate().

open class ExamplesApplication : Application(), HasActivityInjector {

    @Inject lateinit var dispatchingActivityInjector: DispatchingAndroidInjector<Activity>

    override fun onCreate() {
        super.onCreate()
        initAppComponent().inject(this)
    }

    open fun initAppComponent(): AppComponent {
        return DaggerAppComponent
            .builder()
            .appModule(AppModule(this))
            .build()
    }

    override fun activityInjector(): DispatchingAndroidInjector<Activity>? {
        return dispatchingActivityInjector
    }
}

Setting Up Dagger In The Test Application

In order to mock responses from the server, we need to create a new Application class that extends the class above.

class TestExamplesApplication : ExamplesApplication() {

    override fun initAppComponent(): AppComponent {
        return DaggerAppComponent.builder()
            .appModule(MockApplicationModule(this))
            .build()
    }

    @Module
    private inner class MockApplicationModule internal constructor(application: Application) : AppModule(application) {
        override fun provideUserService(context: Context): UserService {
            val mock = Mockito.mock(UserService::class.java)
            `when`(mock!!.displayUserName("Test")).thenReturn("Hello Test!")
            return mock
        }
    }
}

As you can see in the example above, we’ve used Mockito to mock UserService and stub its responses. We still need a new runner that will point to the new application class with the overridden dependencies.

class MockTestRunner : AndroidJUnitRunner() {

    override fun onCreate(arguments: Bundle) {
        StrictMode.setThreadPolicy(StrictMode.ThreadPolicy.Builder().permitAll().build())
        super.onCreate(arguments)
    }

    @Throws(InstantiationException::class, IllegalAccessException::class, ClassNotFoundException::class)
    override fun newApplication(cl: ClassLoader, className: String, context: Context): Application {
        return super.newApplication(cl, TestExamplesApplication::class.java.name, context)
    }
}

Next, you need to update the build.gradle file to use the MockTestRunner.

android {
    ...

    defaultConfig {
        ...
        testInstrumentationRunner "com.maskaravivek.testingExamples.MockTestRunner"
    }
}

Running The Test

All tests with the new TestExamplesApplication and MockTestRunner should be added in the androidTest package. This implementation makes the tests fully independent of the server and gives us the ability to manipulate responses.
With the setup above in place, our test class won’t change at all. When the test is run, the app will use TestExamplesApplication instead of ExamplesApplication, and, thus, a mocked instance of UserService will be used.

@RunWith(AndroidJUnit4::class)
class MainActivityTest {

    @Rule @JvmField var activityActivityTestRule = ActivityTestRule(MainActivity::class.java)

    @Test
    fun setUserName() {
        onView(withId(R.id.name_field)).perform(typeText("Test"))
        onView(withId(R.id.set_user_name)).perform(click())
        onView(withText("Hello Test!")).check(matches(isDisplayed()))
    }
}

The test will run successfully when you click on the green play button in the IDE.



That’s it! You have successfully set up Dagger and run tests using Espresso and Mockito.

Conclusion

We’ve highlighted that the most important aspect of improving code coverage is to write testable code. Frameworks such as Espresso and Mockito provide easy-to-use APIs that make writing tests for various scenarios easier. Tests should be run in isolation, and mocking the dependencies gives us an opportunity to ensure that objects perform the actions that are expected of them.

A variety of Android testing tools are available, and, as the ecosystem matures, the process of setting up a testable environment and writing tests will become easier.

Writing unit tests requires some discipline, concentration and extra effort. By creating and running unit tests against your code, you can easily verify that the logic of individual units is correct. Running unit tests after every build helps you to quickly catch and fix software regressions introduced by code changes to your app. Google’s testing blog discusses the advantages of unit testing.
The complete source code for the examples used in this article is available on GitHub. Feel free to take a look at it.

Smashing Editorial
(da, lf, ra, al, il)



Why Web Application Maintenance Should Be More Of A Thing

Traditional software developers have been hiding a secret from us in plain sight. It’s not even a disputed fact. It’s part of their business model.

It doesn’t matter if we’re talking about high-end enterprise software vendors or smaller software houses that write the tools that we all use day to day in our jobs or businesses. It’s right there, front and center: additional costs that they don’t hide and that we’ve become accustomed to paying.

So what is this secret?

Well, a lot of traditional software vendors make more money from maintaining the software that they write than they do in the initial sale.

Not convinced?

A quick search on the term “Total Cost of Ownership” will provide you with lots of similar definitions like this one from Gartner (emphasis mine):

[TCO is] the cost to implement, operate, support & maintain or extend, and decommission an application.

Furthermore, this paper by Stanford University asserts that maintenance normally amounts to 60% to 90% of the TCO of a software product.

It’s worth letting that sink in for a minute. They make well over the initial purchase price by selling ongoing support and maintenance plans.

We Don’t Push Maintenance

The problem as I see it is that in the web development industry, web application maintenance isn’t something that we focus on. We might put it in our proposals because we like the idea of a monthly retainer, but they will likely cover simple housekeeping tasks or new feature requests.

It is not unheard of to hide essential upgrades and optimizations within our quotes for later iterations because we’re not confident that the client will want to pay for the things that we see as essential improvements. We try to get them in through the back door. In other words, we are not open and transparent that, just like more traditional software, these applications need maintaining.

Regardless of the reasons why, it is becoming clear that we are storing up problems for the future. The software applications we’re building are here for the long term. We need to be thinking like traditional software vendors: our software will still be running 10 or 15 years from now, and it should be kept well maintained.

So, how can we change this? How do we all as an industry ensure that our clients are protected so that things stay secure and up to date? Equally, how do we get to take a share of the maintenance pie?

What Is Maintenance?

In their 2012 paper Effective Application Maintenance, Heather Smith and James McKeen define maintenance as (emphasis is mine):

Porting an application to a new server, interfacing with a different operating system, upgrading to a newer release, altering a tax table, or complying with new regulations: all necessitate application maintenance. As a result, maintenance is focused on upgrading an application to ensure it remains productive and/or cost-effective. The definition of application maintenance preferred by the focus group is any modification of an application to correct faults, to improve performance, or to adapt the application to a changed environment or changed requirements. Thus, adding new functionality to an existing application (i.e., enhancement) is not, strictly speaking, considered maintenance.

In other words, maintenance is essential work that needs to be carried out on a software application so it can continue to reliably and securely function.

It is not adding new features. It is not checking log files or ensuring backups have run (these are housekeeping tasks). It is working on the code and the underlying platform to ensure that things are up to date, that the application performs as its users would expect and that the lights stay on.

Here are a few examples:

  • Technology and Platform Changes
    Third-party libraries need updating. The underlying language requires an update (e.g., PHP 5.6 to PHP 7.1). Modern operating systems send out updates regularly. Keeping on top of this is maintenance, and at times it will also require changes to the code base as the old ways of doing certain things become deprecated.

  • Scaling
    As the application grows, there will be resource issues. Routines within the code that worked fine with 10,000 transactions per day struggle with 10,000 per hour. The application needs to be monitored, but also action needs to be taken when alerts are triggered.

  • Bug Fixing
    Obvious but worth making explicit. The software has bugs, and they need fixing. Even if you include a small period of free bug fixes after shipping a project, at some point the client will need to start paying for these.

Hard To Sell?

Interestingly, when I discuss this with my peers, they feel that it is difficult to convince clients that they need maintenance. They are concerned that their clients don’t have the budget and they don’t want to come across as too expensive.

Well, here’s the thing: it’s actually a pretty easy sell. We’re dealing with business people, and we simply need to be talking to them about maintenance in commercial terms. Business people understand that assets require maintenance or they’ll become liabilities. It’s just another standard ongoing monthly overhead. A cost of doing business. We just need to be putting this in our proposals and making sure that we follow up on it.

An extremely effective method is to offer a retainer that incorporates maintenance at its core but also bundles a lot of extra value for the client, things like:

  • Reporting on progress vs. KPIs (e.g. traffic, conversions, search volumes)
  • Limited ‘free’ time each month for small tweaks to the site
  • Reporting on downtime, server updates or development work completed
  • Access to you or specific members of your team by phone to answer questions

Indeed, you can make the retainer save the client money and pay for itself. A good example of this would be a client’s requirement to get a simple report or export from the database each month for offline processing.

You could quote for a number of development days to build out a reporting user interface (probably more complex than initially assumed), or alternatively point the client to your retainer and include within it a monthly task for a developer to run a pre-set SQL query and provide the same data.

A trivial task for you or your team; lots of value to your client.

A Practical Example

You’ll, of course, have your own way of writing proposals, but here are a couple of snippets from an example pitch.

In the section of your proposal where you might paint your vision for the future, you can add something about maintenance. Use this as an opportunity to plant the seed about forming a long-term relationship.

You are looking to minimize long-term risk.

You want to ensure that your application performs well, that it remains secure and that it is easy to work on.

You also understand how important maintenance is for any business asset.

Later on, in the deliverables section, you can add a part about maintenance either as a stand-alone option or bundled in with an ongoing retainer.

In the following example, we keep it simple and bundle it in with a pre-paid development retainer:

We strongly advocate that all clients consider maintenance to be an essential overhead for their website. Modern web applications require maintenance; just like your house or your car, you keep your asset maintained to reduce the tangible risk that it becomes a liability later on.

As a client who is sensibly keen to keep on top of the application’s maintenance as well as getting new features added, we’d suggest N days per month (as a starting point) for general maintenance and development retainer.

We’d spread things out so that a developer is working on your system at least [some period per week/month], giving you the distinct advantage of having a developer able to switch to something more important should issues arise during the [same period]. Depending upon your priorities, that time could all be spent on new feature work or divided between features and maintenance; it’s your call. We normally suggest a 75%/25% split between new features and important maintenance.

As previously mentioned, this is also a great opportunity to lump maintenance in with other value-added ongoing services like performance reporting, conducting housekeeping tasks like checking backups and maybe a monthly call to discuss progress and priorities.

What you’ll probably find is that after you land the work, the retainer is not mentioned again. This is understandable, as there is a lot for you and your client to consider at the beginning of a project, but the end of the project is a great time to re-introduce the retainer as part of your offboarding process.

Whether this is talking about phase 2 or simply introducing final invoices and handing over, remind them about maintenance. Remind them of ongoing training, reporting, and being available for support. Make the push for a retainer, remembering to talk in those same commercial terms: their new asset needs maintaining to stay shiny.

Can Maintenance Be Annoying?

A common misconception is that maintenance retainers can become an additional burden. The concern is that clients will be constantly ringing you up and asking for small tweaks as part of your retainer. This is a particular concern for smaller teams or solo consultants.

It is not usually the case, though. Maybe at the beginning, the client will have a list of snags that need working through, but this is par for the course; if you’re experienced, then you’re expecting it. These are easily managed by improving communication channels (use an issue tracker) and lumping all requests together, i.e., working on them in a single hit.

As the application matures, you’ll drop into a tick-over mode. This is where the retainer becomes particularly valuable to both parties. It obviously depends on how you’ve structured the retainer but from your perspective, you are striving to remind the client each month how valuable you are. You can send them your monthly report, tell them how you fixed a slowdown in that routine and that the server was patched for this week’s global OS exploit.

You were, of course, also available to work on a number of new requested features that were additionally chargeable. From your client’s perspective, they see that you are there, they see progress, and they get to remove “worry about the website” from their list. Clearly, ‘those clients’ do exist, though, so the most important thing is to get your retainer wording right and manage expectations accordingly.

If your client is expecting the moon on a stick for a low monthly fee, push back or renegotiate. Paying you to do, say, two hours’ maintenance and housekeeping per month alongside providing a monthly report and other ancillary tasks is exactly that; it’s not a blank cheque for lots of ad-hoc changes. Remind them what is included and what isn’t.

How Do We Make Maintenance Easier?

Finally, to ensure the best value for your clients and to make your life easier, use some of these tactics when building your applications.

Long-Term Support (LTS)

  • Use technology platforms with well documented LTS releases and upgrade paths.
  • Ongoing OS, language, framework and CMS upgrades should be expected and factored in for all projects so tracking an LTS version is a no-brainer.
  • Everything should be running on a supported version. Big alarm bells should be ringing if this is not the case.

Good Project Hygiene

  • Have maintenance tasks publicly in your feature backlog or issue tracking system and agree on priorities with your client. Don’t hide the maintenance tasks away.
  • Code level and functional tests allow you to keep an eye on particularly problematic code and will help when pulling modules out for refactoring.
  • Monitor the application and understand where the bottlenecks and errors are. Any issues can get added to the development backlog and prioritized accordingly.
  • Monitor support requests. Are end users providing you with useful feedback that could indicate maintenance requirements?
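As a sketch of the point about code-level and functional tests, a couple of regression tests can pin down the current behavior of a problematic routine before you refactor it. The pricing function below is entirely hypothetical, invented for illustration:

```python
# A hypothetical pricing routine that has caused maintenance issues.
def apply_discount(total, loyalty_years):
    """Give customers 5% off per loyalty year, capped at 25%."""
    discount = min(0.05 * loyalty_years, 0.25)
    return round(total * (1 - discount), 2)

# Regression tests pin the current behavior down so a later refactor
# can be verified against it (run with pytest, or as plain asserts).
def test_discount_is_capped():
    assert apply_discount(100.0, 10) == 75.0  # capped at 25%

def test_new_customer_pays_full_price():
    assert apply_discount(100.0, 0) == 100.0
```

With tests like these in the backlog alongside feature work, pulling the module out for refactoring becomes a routine retainer task rather than a leap of faith.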

The Application Should Be Portable

  • Any developer should be able to get the system up and running easily locally — not just you! Use virtual servers or containers to ensure that development versions of the applications are identical to production.
  • The application should be well documented. At a minimum, the provisioning and deployment workflows and any special incantations required to deploy to live should be written down.

Maintenance Is A Genuine Win-Win

Maintenance is the work we need to do on an application so it can safely stand still. It is a standard business cost, accounting for, on average, 75% of the total cost of ownership over a software application’s lifetime.

As professionals, we have a duty of care to be educating our clients about maintenance from the outset. There is a huge opportunity here for additional income while providing tangible value to your clients. You get to keep an ongoing commercial relationship and will be the first person they turn to when they have new requirements.

Continuing to provide value through your retainer will build up trust with the client. You’ll get a platform to suggest enhancements or new features, work that you have a great chance of winning. Your client reduces their lifetime costs, they reduce their risk, and they get to stop worrying about performance or security.

Do yourself, your client and our entire industry a favor: help make web application maintenance become more of a thing.


View this article: 

Why Web Application Maintenance Should Be More Of A Thing

Learning Elm From A Drum Sequencer (Part 1)

If you’re a front-end developer following the evolution of single page applications (SPA), it’s likely you’ve heard of Elm, the functional language that inspired Redux. If you haven’t, it’s a compile-to-JavaScript language comparable with SPA projects like React, Angular, and Vue.
Like those, it manages state changes through its virtual dom aiming to make the code more maintainable and performant. It focuses on developer happiness, high-quality tooling, and simple, repeatable patterns.

See the article here: 

Learning Elm From A Drum Sequencer (Part 1)

Using Slack To Monitor Your App

For the past few months, I’ve been building a software-as-a-service (SaaS) application, and throughout the development process I’ve realized what a powerful tool Slack (or team chat in general) can be to monitor user and application behavior. After a bit of integration, it’s provided a real-time view into our application that previously didn’t exist, and it’s been so invaluable that I couldn’t help but write up this show-and-tell.
It all started with a visit to a small startup in Denver, Colorado.

View this article – 

Using Slack To Monitor Your App

Finding Better Mobile Analytics

When creating a mobile application, a developer imagines a model and the way users will use the application. One problem that developers face is that users do not always use an app the way it was envisaged by the developer.


How do users interact with the app? What do they do in the app? Do they do what the developer wants them to do? Mobile analytics help to answer these questions. Analytics allow the developer to understand what happens with the app in real life and provide an opportunity to adjust and improve the app after seeing how users actually use it. To put it simply, analytics is the study of user behavior.

The post Finding Better Mobile Analytics appeared first on Smashing Magazine.

This article: 

Finding Better Mobile Analytics


React Server Side Rendering With Node And Express

Web applications are everywhere. There is no official definition, but we’ve made the distinction: web applications are highly interactive, dynamic and performant, while websites are informational and less transient. This very rough categorization provides us with a starting point, from which to apply development and design patterns.
These patterns are often established through a different look at the mainstream techniques, a paradigm shift, convergence with an external concept, or just a better implementation.

More:  

React Server Side Rendering With Node And Express

High-Impact, Minimal-Effort Cross-Browser Testing

Cross-browser testing is time-consuming and laborious. However, developers are lazy by nature: adhering to the DRY principle, writing scripts to automate things we’d otherwise have to do by hand, making use of third-party libraries; being lazy is what makes us good developers.
The traditional approach to cross-browser testing ‘properly’ is to test in all of the major browsers used by your audience, gradually moving onto the older or more obscure browsers in order to say you’ve tested them.

Originally from – 

High-Impact, Minimal-Effort Cross-Browser Testing

3 Ways Tinkoff Bank Optimized Credit Card Conversions – Case Study

Conversion Rate Optimization (CRO) is a process-oriented practice, which essentially aims at enhancing user experience on a website.

It starts with proactively recognizing challenges faced by users across a conversion funnel, and addressing them through various tools and techniques.

Tinkoff Bank understands the need for a process-oriented approach to CRO and puts it into practice.

The following case study tells us more about Tinkoff’s CRO methodology — and how it delivers incredible results.

About the Client

Tinkoff Bank is a major online financial services provider in Russia, launched in 2006 by Oleg Tinkov. In a short span of time, the bank has grown into a leader in credit cards, becoming one of the top four credit card issuers in Russia.

Notably, the bank was named Russia’s Best Consumer Digital Bank in 2015 by Global Finance.

Tinkoff operates through a branch-less digital platform, and relies a lot on its website for finding new customers. Like any other smart business, the bank constantly explores new ways to improve its website’s conversion rate. For this job, Tinkoff has a dedicated web analytics team that plans and executes CRO strategies on the website.

Context

Tinkoff Bank lets users apply for a credit card through an application form on its website. Users can fill out the application form and submit it for approval from the bank. Once the application is approved, users receive their credit card at home, with zero shipping cost.

This is the original application page:

Tinkoff's Application Page

The application page on the website is fairly elaborate, consisting of a multi-step form and details about the application process and the credit card plan. This page is where conversions (form-submits) happen for Tinkoff.

Since the form involves multiple steps for completion, Tinkoff tracks submits for each step of the form along with submits for the complete form. Tinkoff refers to these conversions as short-application submits and long-application submits, respectively.

The ultimate goal for Tinkoff is to increase these conversions.

The Case

The CRO team at Tinkoff was working on improving their website’s usability to get higher conversions. It began with identifying key pages on the website that could be optimized. For this purpose, the team analyzed the website’s user data with Adobe Site Catalyst. It found that the credit-card application page had a significant bounce rate.

Next, the team planned ways to help users stay on the application page and complete the conversion. They zeroed in on three areas of the web page where they could introduce new features. The hypothesis was that these new features would improve the user experience on the page.

However, the team needed to be absolutely sure about the effectiveness of these new features before applying changes to the web page permanently. There was only one way to do it — through A/B testing!

Tinkoff used VWO to carry out A/B tests on the page, and determine whether it was beneficial to introduce new functions there.

Let’s look at the tests closely.

TEST #1: Providing an Additional Information Box

The Hypothesis

By offering additional details about the credit card above the form, the number of sign-ups will increase.

The Test

Tinkoff created two variations of the original (control) page.

The first variation included a “More details” hyperlink underneath the “Fill out form” CTA button placed above the fold. When clicked, the hyperlink led to a new page which provided additional information about the credit card scheme.

Here is how it looked.

(Screenshot: first variation)

The second variation had the same “More details” link below the CTA button. But this time, the link opened up a box right below. The box provided additional information — through text and graphics — about the credit card.

Here’s the second variation.

(Screenshot: second variation)

The test was run on more than 60,000 visitors for a period of 13 days.

The Result

The first variation failed to outperform the control; in fact, it had a lower conversion rate.

The second variation, however, won against the control, and improved the page’s conversion rate by a handsome 15.5%. Moreover, it had a 100% chance of beating the control.

The Analysis

Displaying Key Differentiators:

Prominently placing key differentiators (factors that make one brand superior to its competitors) on a web page is one of the leading CRO best practices. Key differentiators enhance the brand’s image in users’ eyes, which influences them to convert.

Tinkoff, too, wanted to place its differentiators on the application form page. In order to not clutter the page, Tinkoff decided to display these differentiators within a box, behind the “More details” link.

The box clearly illustrated Tinkoff’s key differentiators such as free shipping of the card, free card recharge, and cashback on all purchases made through the card.

Related Post: Optimize Your Marketing Efforts with a Killer Value Proposition

Emphasizing Free Shipping:

By now, we all know how free shipping influences the minds of the customers. In fact, lack of free shipping is the number one reason why people abandon their shopping carts!

Naturally, displaying “Free shipping” prominently on the application page worked well for Tinkoff.

free shipping

Note: Although free shipping was already mentioned in the original page’s top right corner, it didn’t have much contrast against the background, making it easy for visitors to miss. The variation, however, increased the chances of visitors spotting the much-loved free shipping offer.

Reassuring Users About Tinkoff’s Credibility:

Reassuring users at each step of a conversion process helps improve the conversion rate. This is the reason why trust badges, testimonials, and social proof work for so many websites.

Likewise, the features-box on the application page reassured users about Tinkoff’s credibility. The box mentioned how Tinkoff is the leading internet bank providing more than 300,000 points of recharge, and how its service is completely digital — users don’t ever have to visit bank branches. This helped in making users trust the bank’s services, thereby increasing form submits.

Related Resource: 32% Increase in Conversions by A/B Testing for The Right Reasons

Why Did The First Variation Fail?

The “More details” link on the first variation led users to a new page with additional information about the credit card. This feature, however, distracted some users away from the application form. And since web users have short attention spans, some probably didn’t return to complete the form, reducing the total number of conversions.

Furthermore, users had to make the effort of leaving the application page, browsing the content on the new page, and returning to submit the form. Because of this effort, many users wouldn’t have visited the “More details” page at all, nullifying any value it could have provided. And without enough information, many users wouldn’t have converted.

Unsure users are the first to bounce off. Keep reassuring them about your credibility.

TEST #2: Gamifying the Form Using a Progress Bar

The Hypothesis

Providing a “progress bar” on top of the four-step application form will motivate users to fill out the form completely, resulting in a higher conversion rate.

The Test

Here again, Tinkoff designed two variations of the original form page.

The first variation had a yellow, banner-like progress bar right above the form. The progress bar highlighted the step the user was on. It also displayed the user’s progress through the form graphically, using a black line at its bottom. The bar mentioned the probability of approval of a credit card based on how far the user had gotten through the form.

This is the first variation.

(Screenshot: first variation, with yellow progress bar)

The second variation also had a progress bar, but with a different design.

Similar to the first variation, the second variation’s progress bar displayed the form’s step number and the probability of credit card approval. But this progress bar was green, and it didn’t have an additional black line to show the user’s progress. Instead, the bar itself represented the user’s progress graphically: the green portion grew as users moved further through the form.

Take a look.

(Screenshot: second variation, with green progress bar)

The test ran on more than 190,000 visitors for a period of 39 days.

The Result

Both the variations outperformed the control!

The first variation had a 6.9% higher conversion rate than the control.

However, the second variation was the overall winner. It improved the conversion rate of the page by a robust 12.8%.

Both the variations had a 100% chance to beat the original page.

The Analysis

Curbing Users’ Anxiety:

Nobody likes filling up long forms on websites. Users only do that when they expect equal or higher value in return.

When users find lengthy forms, they often become anxious. This happens because they aren’t sure of gaining satisfactory value after completing the form. Many times, users’ anxiety leads them to bounce off the form (or the website altogether).

However, there are various website elements that are used to reduce users’ anxiety on a website — progress bar being one of them.

Progress bar (Source)

A progress bar helps curb anxiety of users by providing them a visual cue about the effort required to complete a process. It reassures users that the process will be completed in due time and effort, keeping them from bouncing off the page.

This has been borne out by various studies on website and application design.

‘Gamifying’ Users’ Experience:

Almost all web users today have played video games on one platform or another, so it’s safe to say that most are familiar with the progress bars displayed within such games. Progress bars there are usually associated with a user’s progress within the game, telling them how far they’ve got toward finishing the game’s objective (or beating a certain opponent).

progress bar in games

The progress bar on Tinkoff’s credit card application form introduced a similar gaming experience to its users. The progress bar could only be fully filled when users completed their whole form. Whenever users found a partially filled progress bar, they had an additional motivation to fill and submit the form.

The fully filled progress bar, later, provided users with a sense of achievement.

‘Rewarding’ Users:

The progress bar deployed another gamification technique — reward.

On Tinkoff’s form page, the technique was put into force using an overlaid text on the progress bar. For instance, when users were on the second step of the form, the text read “The probability of approval is 30%” and “Get 10% for Step 2 completion.” Since users were investing time and effort in applying for the credit card, they would really want to have the highest probability for its approval. By realizing the importance of each step of the form for their application’s approval, users were further motivated to complete them.

Why Did The Second Variation Perform Better Than The First?

Because the second variation’s progress bar had greater visibility on the application page.

Providing contrast to your key elements on a web page is one of the fundamental principles of web design.

The first variation’s progress bar was a black line at the bottom of a yellow banner. Since the overall page’s color scheme included white, grey and yellow, the progress bar and the banner didn’t have much contrast. For some users, the progress bar could have easily blended in with the page’s theme. Moreover, the progress bar was quite thin, possibly making it even harder to notice.

progress bar close up

The second variation’s progress bar, on the other hand, was a bold green, giving it ample contrast and visibility on the page. The bar was also wide enough to be noticeable. And once users noticed the progress bar, its persuasive factors started to work on them.

Gamify your online forms to increase form-submits and conversions.

TEST #3: Letting Users Fill Their Details Later

The Hypothesis

Giving users an option to fill in their passport details later will increase the number of form-submits.

The Test

This test involved only one variation that was pitted against the control.

On the form’s second step, users were required to submit their passport information. The variation gave users an option to complete this step later, via a “Don’t remember passport details” checkbox. Upon clicking this checkbox, a small window appeared, asking users to choose a medium (phone or email) through which to provide their details later. Users could complete the form whenever they had their passport details handy.

Here are the screenshots of the checkbox and the pop-up window.

fill details later - checkbox
Checkbox
Fill details later -- box
Pop-up

The test ran on over 265,000 visitors for a period of 23 days.

The Result

The variation won over the control page convincingly. It improved the conversion rate of the form by a whopping 35.8%. The after-filling conversion rate, too, increased by 10%.

The variation had a 100% chance to beat the control.

The Analysis

Acknowledging Users’ Issues:

The second step of the application form required detailed information about users’ passports. The form asked for details like the passport’s date of issue, series and number, division code, and more. Most users don’t remember these details from memory; to complete the form, they had no option but to take out their passports and look up the required information. However, some users wouldn’t have had their passport handy while completing the form, which would have forced them to abandon it.

Now, with the option to fill out the passport details on the form later, users didn’t have a reason to leave the application form in the middle.

Providing Freedom to Users:

Once users clicked on the “Don’t remember passport details” checkbox on the page, they were met with two options for filling up the form later. They could either have the incomplete form’s link emailed to them, or they could choose the ‘phone’ option. The latter option allowed users to fill up the form through a phone call with Tinkoff’s executives.

Completing the form through a telephone call, particularly, reduced a great deal of effort that users had to make.

Virtually Shortening the Form-length:

Once users chose to fill in their passport details later, they were left with only two steps to complete out of the total four. So effectively, users had already covered half of the application form, and this was reinforced by the progress bar on top of the form.

Having breezed through the first half of the form, users looked forward to completing the next half equally quickly.


In addition, the option to provide passport data through a phone call effectively converted the form into a three-step process.

Addressing the convenience of your users should always be your top priority.

Conclusion

Conversion Rate Optimization is not about testing random ideas on your website. It is about improving your website’s user experience through a coherent process. This process involves identifying areas of improvement on your website and suggesting changes based on traffic data, user behavior and best practices. It’s followed by A/B testing those changes and learning about their effectiveness. Only when the changes improve your website’s conversion rate do you apply them permanently.

The post 3 Ways Tinkoff Bank Optimized Credit Card Conversions – Case Study appeared first on VWO Blog.

See original article – 

3 Ways Tinkoff Bank Optimized Credit Card Conversions – Case Study

Four Ways To Build A Mobile Application, Part 3: PhoneGap

This is the third installment in a series covering four ways to develop a mobile application. In previous articles, we examined how to build a native iOS and native Android tip calculator. In this article, we’ll create a multi-platform solution using PhoneGap.
Adobe’s PhoneGap platform enables a developer to create an app that runs on a variety of mobile devices. The developer accomplishes this largely by writing the user interface portion of their application with Web technologies such as HTML, CSS and JavaScript.

Continue reading: 

Four Ways To Build A Mobile Application, Part 3: PhoneGap

Four Ways To Build A Mobile Application, Part 1: Native iOS

The mobile application development landscape is filled with many ways to build a mobile app. Among the most popular are native iOS, native Android, PhoneGap and Appcelerator Titanium. This article marks the start of a series of four articles covering these technologies. The series will provide an overview of how to build a simple mobile application using each of these four approaches. Because few developers have had the opportunity to develop for mobile using a variety of tools, this series is intended to broaden your scope.

Link:  

Four Ways To Build A Mobile Application, Part 1: Native iOS