DEV Community

Roman Chugunov

Best Practices for Android UI Testing

In the previous article, we discussed unit testing frameworks for Android that have recently become popular or have long been a development standard. Unit tests sit at the foundation of the Test Pyramid. In this article, we will move one level higher, to UI testing frameworks.


As in the previous article, I won’t cover the basics of writing tests with UI Automator and Espresso, since it’s implied that you are already familiar with them. Instead, I will offer some advice on how to make your life easier when writing UI tests, and how to solve the most common problems. They can’t always be solved with the standard tools, so various plugins, extensions and frameworks often come to the rescue. I will cover those I have worked with myself and those I can recommend if you haven’t tried them yet.

Espresso

So, UI tests. Espresso, Google’s framework, is the gold standard here. There is plenty of documentation for Espresso, but in a nutshell, almost every test follows this algorithm:

  • Find elements with a ViewMatcher
  • Perform some ViewAction on them
  • Check the result displayed on the screen with a ViewAssertion
@RunWith(AndroidJUnit4::class)
class MainActivityTest {

    @Test
    fun test_typeName_greetingIsDisplayed() {
        onView(withId(R.id.nameEditText)).perform(typeText("Alex"))

        onView(withId(R.id.greetButton)).perform(click())

        onView(withId(R.id.greetingTextView)).check(matches(withText("Hi, Alex!")))
    }
}

Often, tapping a button opens another screen. For this, we have another tool, IntentMatcher, which can check whether a certain Intent was launched.
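As a sketch of how that looks in practice (DetailsActivity and the button id are placeholders, and the espresso-intents artifact is assumed to be on the test classpath):

```kotlin
// Hypothetical example: verifying that tapping a button launches DetailsActivity.
// Requires androidx.test.espresso:espresso-intents.
@Test
fun clickDetailsButton_launchesDetailsActivity() {
    Intents.init() // start recording outgoing Intents
    try {
        onView(withId(R.id.detailsButton)).perform(click())

        // Assert that an Intent targeting DetailsActivity was fired
        intended(hasComponent(DetailsActivity::class.java.name))
    } finally {
        Intents.release()
    }
}
```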

These four components (ViewAction, ViewMatcher, ViewAssertion, IntentMatcher) are the foundation of all UI tests. The example above is very simple, but on a complicated screen where a lot happens, the body of the test can grow significantly and become much harder to read. To improve the structure and readability of tests, various design patterns are applied.

Design Patterns for Test Readability

  • Page Object Pattern: This pattern implies that each screen of an app is represented as a separate class that contains all the interface elements and methods to interact with them. As a result, a test scenario does not depend on the details of the UI implementation and can be easily adapted to changes in design. This pattern is used in the Kakao and Kaspresso frameworks (I will discuss Kaspresso later in this article).
  • Screenplay Pattern: This pattern is an improved version of the Page Object pattern that adds two more components: actors and abilities. Actors are user roles that perform actions in the app. Abilities determine how actors can interact with the app (for example, through Espresso or UiAutomator). This pattern lets you write tests at a high level of abstraction and better reflect the app’s business logic.
  • Robot Pattern: This pattern is similar to the Screenplay pattern, but instead of actors and abilities it uses robots that encapsulate the logic of interacting with screens. Robots can be reused in different tests and combined with each other. This pattern simplifies the structure of tests and saves you from code duplication.

Espresso code written with the Robot pattern looks like this:

@RunWith(AndroidJUnit4::class)
class MainActivityTest {

    @Test
    fun test_clickRefreshButton_freshItemsAreLoaded() {
        login {
            setEmail("mail@example.com")
            setPassword("pass")
            clickLogin()
        }
        home {
            checkFirstRow("First item")
            clickRefreshItems()
            checkFirstRow("Fresh item") // the refreshed content
        }
    }
}

And the robot that encapsulates Espresso logic will look like this:

class HomeRobot {
    fun checkFirstRow(text: String) {
        onView(withId(R.id.item1)).check(matches(withText(text)))
    }

    fun clickRefreshItems() {
        onView(withId(R.id.button)).perform(click())
    }
}

fun home(body: HomeRobot.() -> Unit) {
    HomeRobot().apply(body)
}

This way, if the test fails, we will know at which step something went wrong and what to do about it.

Ideally, our Espresso tests should be as simple and readable as unit tests. That’s not always achievable if a single test covers an entire flow spanning several screens at once; in that case, it is impossible to test one specific screen thoroughly.

Flow Testing vs Screen Testing

When your Espresso test covers an entire flow, your test will look like this:

  • Open Screen A
  • Do Action 1
  • Make sure that the action was successful
  • Do Action 2
  • Make sure that Screen B was open
  • Do Action 3
  • Make sure that the action was successful

Here, we go through a so-called happy path and don’t check any corner cases. Such a test is really an end-to-end (e2e) test, and it should be decoupled from the screen implementation as much as possible (ideally written not with Espresso, but with a framework providing a higher level of abstraction, such as UI Automator or Appium). Due to their complexity, such tests fail often and are quite difficult to fix. They are also expensive to run on CI: they can run for minutes or even hours, so they are not something you want to run on every pull request. This is why a project can’t afford many of them.

Instead, we can have more atomic UI tests that exercise only a specific screen. Such a test contains a simple set of actions:

  • Open a screen in a @Before method
  • Do Action 1
  • Make sure that the action was successful
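Such a screen-scoped test might look like this (a sketch; HomeActivity and the view ids are placeholder names):

```kotlin
@RunWith(AndroidJUnit4::class)
class HomeScreenTest {

    private lateinit var scenario: ActivityScenario<HomeActivity>

    @Before
    fun setUp() {
        // Launch only the screen under test, not the whole flow
        scenario = ActivityScenario.launch(HomeActivity::class.java)
    }

    @After
    fun tearDown() {
        scenario.close()
    }

    @Test
    fun clickRefresh_itemsListIsDisplayed() {
        onView(withId(R.id.refreshButton)).perform(click())
        onView(withId(R.id.itemsList)).check(matches(isDisplayed()))
    }
}
```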

There can be many such tests; they can cover both the happy path and various corner cases. They are also usually more stable: thanks to their simplicity, the chance that something goes wrong and the test fails is far lower. Still, sometimes your test is correct and the business logic it covers is correct too; you expect the test to be green 100% of the time, yet it turns out red in 1 case out of 100. Such tests are called flaky.

Flakiness

Probably the main source of flaky tests is network and other background operations. The thing is, when we perform some operation in a test (for example, click a button) and expect a certain result, that result may be delayed. By default, Espresso can wait for various operations to complete, but only operations involving UI interaction (for example, when another Activity opens with an animation). Espresso knows nothing about background operations related to our business logic. This can make onView(withId(R.id.item1)).check(matches(withText(text))) fail because the expected text has not been loaded or displayed yet. The test won’t fail every time, though: only when the Internet connection is slow on the emulator running the test. This is perhaps one of the most common causes of flaky tests. There are several ways to deal with it:

  • Add Thread.sleep(…) to the test. This brute-force method helps in the majority of cases, but, firstly, nothing guarantees that the delay won’t turn out longer than the sleep, in which case the test still fails. Moreover, sleep adds an unnecessary delay to every run: even if the server responds quickly, the test still takes longer than it should.
  • Add a timeout and retries to the ViewMatcher. Something like this:

    fun onViewWithTimeout(
        retries: Int = 10,
        retryDelayMs: Long = 500,
        retryAssertion: ViewAssertion = matches(withEffectiveVisibility(Visibility.VISIBLE)),
        matcher: Matcher<View>,
    ): ViewInteraction {
        repeat(retries) { i ->
            try {
                val viewInteraction = onView(matcher)
                viewInteraction.check(retryAssertion)
                return viewInteraction
            } catch (e: NoMatchingViewException) {
                // Rethrow on the last attempt instead of sleeping again
                if (i == retries - 1) {
                    throw e
                } else {
                    Thread.sleep(retryDelayMs)
                }
            }
        }
        throw AssertionError("View matcher is broken for $matcher")
    }
    

This approach is used in the Kaspresso framework, which I will talk about below. It is far better than adding Thread.sleep(), but it still doesn’t guarantee that the timeout you set will outlast the server delay. Moreover, such code hides slow spots in your app; so instead of adding a timeout and retries, it’s often better to investigate why the server responds so slowly in this place and whether the issue should be approached from another side.

IdlingResource

As mentioned above, Espresso knows about the idle condition at the UI level: each subsequent ViewAction from your test is launched only when the previous one has finished and the system has come to an idle state. But if some coroutine or Rx Observable runs in the background and returns a result asynchronously, we need to somehow tell Espresso to wait for that operation to complete before performing the next ViewAction/ViewAssertion. You can read about this in detail in the official documentation. Here I’ll just give a few hints that are helpful in practice.

  1. Your production code shouldn’t know anything about IdlingResource. You may have some interface in your app

    interface OperationStatus {
    
        fun finished()
    
        fun reset()
    }
    

    And call this interface in the app to inform the test that the operation is complete:

    coroutineScope.launch(coroutineDispatcher) {
        viewModel.usersFlow.collect {
            // show UI
    
            operationStatus.finished()
        }
    }
    

    And in androidTest, you will have the implementation of this interface that will know about IdlingResource. Correspondingly, you will be able to register it in IdlingRegistry.

    class OperationStatusIdlingResource : OperationStatus {
    
        val idlingResource = CountingIdlingResource("op-status")
    
        override fun finished() {
            idlingResource.decrement()
        }
    
        override fun reset() {
            idlingResource.increment()
        }
    }
    
    @Test
    fun test_clickRefreshButton_freshItemsAreLoaded() {
    
        val idlingResourceImpl = OperationStatusIdlingResource()
        IdlingRegistry.getInstance().register(idlingResourceImpl.idlingResource)
    
        // Test
    }
    

    How do you pass your OperationStatusIdlingResource to the app, given that it exists in tests only? Here, the second principle will help us out.

  2. Use DI. Whether you use Hilt, Dagger or Koin, you will always have a dependency graph and a list of modules where the dependencies (in our case, OperationStatus) are declared. For production code, create a default no-op implementation, and in tests override the module that provides the original dependency, so that the DI graph picks up the test version. I will explain how to override DI dependencies below.

  3. Do not create an IdlingResource for every special case. In the example above, we used it to signal that our data has been loaded when the screen opens. This is just one special case of asynchronous data loading. Even within one screen, you can have multiple asynchronous operations, and creating a separate IdlingResource for each of them is excessive. It’s far better to identify the places where concurrency is introduced. For example, if your app is based on coroutines, asynchronicity is introduced wherever Dispatchers.Default and Dispatchers.IO are used. That means that, in the tests, you can replace these dispatchers with a test version that carries an IdlingResource:

    class SimpleViewModel(
        private val usersRepository: UsersRepository,
        private val coroutineScope: LifecycleCoroutineScope,
        private val coroutineDispatcher: CoroutineDispatcher = Dispatchers.Default,
    ) : ViewModel() {
    
        fun loadUsers(filter: FilterType) {
            coroutineScope.launch(coroutineDispatcher) {
                val allUsers = usersRepository.getUsers()
                // ...
            }
        }
    }
    

    And we can pass the following Dispatcher in the tests via DI:

    class IdlingDispatcher(
        private val wrappedCoroutineDispatcher: CoroutineDispatcher,
    ) : CoroutineDispatcher() {
    
        val counter: CountingIdlingResource = CountingIdlingResource(
            "IdlingDispatcher for $wrappedCoroutineDispatcher"
        )
    
        override fun dispatch(context: CoroutineContext, block: Runnable) {
            counter.increment()
            val blockWithDecrement = Runnable {
                try {
                    block.run()
                } finally {
                    counter.decrement()
                }
            }
            wrappedCoroutineDispatcher.dispatch(context, blockWithDecrement)
        }
    }
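Tying hints 1 and 2 together: in production, the OperationStatus interface can be satisfied by a no-op implementation provided through DI. The module below is a sketch; the names are assumptions about your setup:

```kotlin
// Shipped in the app: does nothing, so production code stays free of test hooks
class NoOpOperationStatus : OperationStatus {
    override fun finished() { /* no-op in production */ }
    override fun reset() { /* no-op in production */ }
}

@Module
open class StatusModule {
    @Provides
    open fun provideOperationStatus(): OperationStatus = NoOpOperationStatus()
}
```

In androidTest, a subclass of this module would override provideOperationStatus() to return an implementation backed by a CountingIdlingResource.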
    

Using Fake Objects in DI

Using fake objects in DI is another practice worth adopting in your tests. By the way, if you don’t use DI in your project yet, you should start.

In the examples above, I described how production code can expose hooks for IdlingResource without knowing about it, but I haven’t discussed how to substitute the test implementations. Let’s cover this in more detail using Dagger as an example.

If you don’t use dagger-android and prefer to create the component manually, your Application will look more or less like this:

open class MyApplication : Application() {

    private lateinit var appComponent: ApplicationComponent

    override fun onCreate() {
        super.onCreate()

        appComponent = DaggerApplicationComponent
            .builder()
            .usersModule(UsersModule())
            .dataModule(DataModule())
            .build()
    }
}

In DataModule, we declare our dispatchers, and in UsersModule, we define the logic related to UsersRepository.

@Module
open class DataModule {

    @Provides
    @MyIODispatcher
    open fun provideIODispatcher(): CoroutineDispatcher = Dispatchers.IO
}

Please note that MyApplication, DataModule and provideIODispatcher are declared open so that they can be subclassed and overridden in tests.

Now, extract the creation of DataModule into a separate method:

open class MyApplication : Application() {

    private lateinit var appComponent: ApplicationComponent

    override fun onCreate() {
        super.onCreate()

        appComponent = DaggerApplicationComponent
            .builder()
            .usersModule(UsersModule())                 
            .dataModule(createDataModule())
            .build()
    }

    open fun createDataModule() = DataModule()
}

Then, in the androidTest folder, create a test Application class and override createDataModule in it.

class MyTestApplication: MyApplication() {

    override fun createDataModule() = TestDataModule()
}

class TestDataModule : DataModule() {

    override fun provideIODispatcher(): CoroutineDispatcher = IdlingDispatcher(Dispatchers.IO)
}

In provideIODispatcher, we create an instance of our IdlingDispatcher that we’ve discussed above, and now, it will be used by default in all UI tests.

But this is not enough. We need to register our test app so that it would run together with the tests. For this, we will need to create a custom TestRunner where we will pass the name of the test app.

class MyApplicationTestRunner: AndroidJUnitRunner() {

    override fun newApplication(cl: ClassLoader?, className: String?, context: Context?): Application {
        return super.newApplication(cl, MyTestApplication::class.java.name, context)
    }
}

Now, we register this TestRunner in build.gradle:

android {
    namespace 'com.rchugunov.tests'
    compileSdk 33

    defaultConfig {
        applicationId "com.rchugunov.tests"
        minSdk 26
        targetSdk 33
        versionCode 1
        versionName "1.0"

        testInstrumentationRunner "com.rchugunov.tests.MyApplicationTestRunner"
    }
}

That’s all we need. In the same way as with IdlingDispatcher, we can override other dependencies, replacing them with fakes. For example, for UsersRepository, such an implementation could look like this:

class FakeUserRepository: UsersRepository {

    var usersToOverride = listOf(
        User(id = "1", userName = "jamie123", age = 10),
        User(id = "2", userName = "christy_a1", age = 34)
    )

    override suspend fun getUsers(): List<User> {
        return usersToOverride
    }
}

Now, when you need a custom list of users, you can inject FakeUserRepository directly into your test and set the usersToOverride list that should be returned to the ViewModel. This is useful when you want to test only the presentation layer, without the data layer. An additional advantage is that tests run faster, since there are no delays from server requests. Below, I will show how else you can mock client-server logic, using WireMock and OkReplay.

Similar to Dagger, you can also provide test implementations in Hilt and Koin.
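For example, in Koin a test can load an extra module that shadows the production binding. A sketch (depending on your Koin version, you may need to allow definition overrides explicitly):

```kotlin
class UsersScreenTest : KoinTest {

    // The definition here replaces the production UsersRepository binding
    private val testModule = module {
        single<UsersRepository> { FakeUserRepository() }
    }

    @Before
    fun setUp() = loadKoinModules(testModule)

    @After
    fun tearDown() = unloadKoinModules(testModule)
}
```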

How else can you make your life easier when writing and using UI tests? Start using Robolectric.

Robolectric

Robolectric is a rather old framework; it dates back to the times when arm-v7 emulators ran on x86 machines and were very slow. Its developers came up with the idea of packaging the Android framework (AOSP) classes into a jar file and running Espresso tests against it as if on a real device. Since the tests actually run on the local machine (just like JUnit tests), they are far faster than the same tests on an emulator or a device.

Robolectric is very simple to use: you only need to add a couple of lines to your existing Espresso tests. Here is an example of a test from the official page:

@RunWith(RobolectricTestRunner::class)
class MyActivityTest {

    @Test
    fun clickingButton_shouldChangeMessage() {

        Robolectric.buildActivity(MyActivity::class.java).use { controller ->
            controller.setup() // Moves Activity to RESUMED state
            val activity: MyActivity = controller.get()
            activity.findViewById<View>(R.id.button).performClick()
            assertEquals("Robolectric Rocks!", activity.findViewById<TextView>(R.id.text).text)
        }
    }
}

There are many advantages to using Robolectric, but the main one is the speed of the tests. However, there are certain limitations: for example, it can’t work with a device’s sensors, system buttons or Location services. Also, don’t forget that you are working with a fake implementation of Android: your code may work in the Robolectric environment but fail on the emulator for some reason. According to Jake Wharton, you should use Robolectric only if you are sure you know how the code under test operates under the hood. I wouldn’t recommend running tests that cover an entire flow or user interaction with the UI on Robolectric. Here are a couple of examples of where Robolectric is a good fit:

  • You can test individual components of your app that belong to the data layer. For example, you can test a Room DAO:

    1. Insert an object into a database
    2. Fetch an object with the same id from a database
    3. Check whether the same object has returned.

    A test written with Robolectric will be ideal for this.

  • Opening deep links. Here, you can launch a Broadcast event and check whether a certain intent with a certain set of parameters has opened.

  • Working with a file system. This is the app’s data layer, so you can test it in isolation from the rest of the flow. In this case, you might need Context, and Robolectric is the tool that can provide it.
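The Room DAO scenario from the first bullet could be sketched like this (AppDatabase, UserDao and UserEntity are placeholder names):

```kotlin
@RunWith(RobolectricTestRunner::class)
class UserDaoTest {

    private lateinit var db: AppDatabase

    @Before
    fun setUp() {
        // In-memory database: fast, and wiped after every test
        db = Room.inMemoryDatabaseBuilder(
            ApplicationProvider.getApplicationContext(),
            AppDatabase::class.java
        ).allowMainThreadQueries().build()
    }

    @After
    fun tearDown() = db.close()

    @Test
    fun insertAndGetById_returnsTheSameUser() {
        val user = UserEntity(id = "1", name = "Alex")
        db.userDao().insert(user)
        assertEquals(user, db.userDao().getById("1"))
    }
}
```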

Thus, Robolectric and Espresso together help you test both individual components of your app and entire screens and flows. However, some scenarios can’t be covered by them, for example, when we need to minimize the app, go to system settings, or grant the app a runtime permission. In these cases, UI Automator is your saviour.

UI Automator

Espresso tests have one important trait: they know about the production code they are testing. You can get a reference to a class instance or inject a fake component into the app from the test. You have access to the app’s resources (R.id…, R.string…, etc.). Thanks to this, you can write quite flexible tests that adapt to the app’s logic. You can even change the app’s logic so that it behaves slightly differently when run under test.

By contrast, UI Automator tests see your app the way a user does. They see text fields and buttons and can interact with UI elements, but they know nothing about internal logic or state. You can’t change the app’s logic or access its resources. Nevertheless, with UI Automator you can do the following:

  • Interact with system apps and settings, such as the home screen, notifications and device settings. For example, here is how you can access the list of system notifications:

    @Test
    @Throws(UiObjectNotFoundException::class)
    fun testNotifications() {
        device.openNotification()
        device.wait(Until.hasObject(By.pkg("com.android.systemui")), 10000)
    
        val notificationStackScroller: UiSelector = UiSelector()
            .packageName("com.android.systemui")
            .resourceId("com.android.systemui:id/notification_stack_scroller")
        val notificationStackScrollerUiObject: UiObject = device.findObject(notificationStackScroller)
        assertTrue(notificationStackScrollerUiObject.exists())
    
        val notiSelectorUiObject: UiObject = notificationStackScrollerUiObject.getChild(UiSelector().index(0))
        assertTrue(notiSelectorUiObject.exists())
        notiSelectorUiObject.click()
    }
    
  • UI Automator can test complex scenarios that involve switching between apps, for example, sharing content or using Intents. Espresso can only test scenarios within one app and cannot handle switching between apps.

  • You can check or change system settings directly while the test is running. This article covers how to connect to Wi-Fi in a test:

    // BySelector matching the just added Wi-Fi
    val ssidSelector = By.text(ssid).res("android:id/title")
    // BySelector matching the connected status
    val status = By.text("Connected").res("android:id/summary")
    // BySelector matching on entry of Wi-Fi list with the desired SSID and status
    val networkEntrySelector = By.clazz(RelativeLayout::class.qualifiedName)
        .hasChild(ssidSelector)
        .hasChild(status)
    
    // Perform the validation using hasObject
    // Wait up to 5 seconds to find the element we're looking for
    val isConnected = device.wait(Until.hasObject(networkEntrySelector), 5000)
    Assert.assertTrue("Verify if device is connected to added Wi-Fi", isConnected)
    

As you can see, UI Automator, Espresso and Robolectric make it possible both to test an app’s components in isolation and to check very complicated flows that involve interaction with other apps and Android components. By the way, you can also combine the approaches and use Espresso and UI Automator in the same test.
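A combined test might look like this (a sketch; the notification text and view ids are placeholders). Both frameworks run in the same instrumentation process, so mixing them in one method works fine:

```kotlin
@Test
fun sendNotification_andOpenItFromTheShade() {
    val device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())

    // Espresso: interact with our own app
    onView(withId(R.id.sendNotificationButton)).perform(click())

    // UI Automator: step outside the app into the system UI
    device.openNotification()
    device.wait(Until.hasObject(By.textContains("My notification")), 5_000)
    device.findObject(By.textContains("My notification"))?.click()

    // Back in the app: verify the notification tap landed on the right screen
    onView(withId(R.id.notificationDetailsView)).check(matches(isDisplayed()))
}
```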

Compose UI Testing

And what about Compose? For testing Compose, there is a dedicated set of APIs that treats composables as individual nodes. It includes finders and actions with which you can locate UI elements and interact with them.

composeTestRule.onNode(hasTestTag("Players"))
    .onChildren()
    .filter(hasClickAction())
    .assertCountEquals(4)
    .onFirst()
    .assert(hasText("John"))

The good news is that these APIs are all about the UI, just like Espresso’s ViewMatchers/ViewActions/ViewAssertions. It means your tests will differ only slightly in syntax, and you will still solve code-cleanliness problems with the Robot or Page Object patterns. To synchronize background work with the test, you will still use IdlingResource. And you can substitute objects in the DI tree just as we did in the Espresso examples.

Moreover, you can still use the Espresso API to test your app’s integration with the Android framework, for example, navigation, animations and dialog windows.

@Test
fun androidViewInteropTest() {
    // Check the initial state of a TextView that depends on a Compose state:
    Espresso.onView(withText("Hello Views")).check(matches(isDisplayed()))
    // Click on the Compose button that changes the state
    composeTestRule.onNodeWithText("Click here").performClick()
    // Check the new value
    Espresso.onView(withText("Hello Compose")).check(matches(isDisplayed()))
}

You can also find articles describing how to run Compose UI tests on Robolectric. Personally, I haven’t done it, because I prefer not to test UI logic outside the emulator.

WireMock / MockWebServer

What else can help us write tests? Frameworks that can mock our network requests. We have discussed the option of creating fake objects and passing them into the DI tree, which lets us emulate some of the business logic and test only high-level logic (the presentation layer). However, in some cases it is still useful to have tests that cover all layers of the app at once. And then you can stumble upon problems such as an unstable server or the difficulty of reproducing the required conditions. All this makes your tests flaky, as discussed above. Fortunately, there are frameworks that let you mock the client-server part.

WireMock and MockWebServer provide similar functionality for stubbing client/server interaction. Let’s use MockWebServer as an example.

Before each test we must start the server, and stop it after the test completes. It’s convenient to do this with a custom TestRule.

class MockWebServerRule : TestRule {

    val server = MockWebServer()

    override fun apply(base: Statement, description: Description): Statement {
        return object : Statement() {
            @Throws(Throwable::class)
            override fun evaluate() {
                server.start(8080)
                try {
                    base.evaluate()
                } finally {
                    server.shutdown()
                }
            }
        }
    }
}

If you expect your requests to be performed in a certain order, you can enqueue the corresponding responses within your test.

@RunWith(AndroidJUnit4::class)
class MyEspressoTest {

    @get:Rule
    val mockWebServerRule = MockWebServerRule()

    @Test
    fun test_some_action() {
        mockWebServerRule.apply {
            server.enqueue(MockResponse().setBody("..."))
            server.enqueue(MockResponse().setBody("Hello world!"))
            server.enqueue(MockResponse().setResponseCode(401))
        }

        // your test case
    }
}

Remember to point Retrofit’s base URL at the test server (127.0.0.1) in tests. You can do this the same way as we overrode dependencies above, when we installed the fake UsersRepository in the test component.
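In the test network module, that could be as simple as the following (a sketch, assuming a Gson converter; the port matches the rule above):

```kotlin
// Test override: every request the app makes goes to the local MockWebServer
val testRetrofit: Retrofit = Retrofit.Builder()
    .baseUrl("http://127.0.0.1:8080/")
    .addConverterFactory(GsonConverterFactory.create())
    .build()
```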

After this, you can run your test, and the responses you added to the queue will be returned for your app’s requests, in order. Note that the number and order of the requests must strictly match the responses set in the test; otherwise the test will almost certainly fail. There is also a way to write more granular logic for handling the app’s requests, using MockWebServer’s Dispatcher:

@RunWith(AndroidJUnit4::class)
class MyEspressoTest {

    @get:Rule
    val mockWebServerRule = MockWebServerRule()

    @Test
    fun test_mock_with_dispatcher() {

        val requests = listOf(MockRule.USERS_REQUEST_FAILED_RESPONSE)

        mockWebServerRule.server.dispatcher = object : Dispatcher() {
            override fun dispatch(request: RecordedRequest): MockResponse {
                // Match the request path against the configured mock rules
                return requests.first { it.content.url == request.path }.content.response
            }
        }

        // YOUR TEST CODE
    }
}

data class MockRuleContent(
    val url: String,
    val response: MockResponse,
)

enum class MockRule(val content: MockRuleContent) {
    USERS_REQUEST_POSITIVE_RESPONSE(MockRuleContent("/users/", MockResponse().setBody("[{\"name\": \"John\"}]"))),
    USERS_REQUEST_FAILED_RESPONSE(MockRuleContent("/users/", MockResponse().setResponseCode(404)))
}

WireMock has similar functionality, but it’s harder to use than MockWebServer, and Android community support is not as strong. On the other hand, WireMock has an important feature: you can run the server in record mode while your tests talk to the real backend. WireMock will capture all server responses and save them to files. Afterwards, you can run the same tests against the already recorded mocks. MockWebServer can’t do this, but OkReplay is perfect for the task.
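For reference, recording with WireMock looks roughly like this (a sketch; api.example.com stands in for your real backend):

```kotlin
val wireMockServer = WireMockServer(8080)
wireMockServer.start()

// Proxy to the real backend and capture every response as a stub
wireMockServer.startRecording("https://api.example.com")

// ... run the test scenario against http://127.0.0.1:8080 ...

// Persist the recorded stubs; later runs can replay them offline
wireMockServer.stopRecording()
wireMockServer.stop()
```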

OkReplay

With OkReplay, you can prepare test stubs from real server responses (similar to WireMock). To use it, add the OkReplayInterceptor to the test Retrofit/OkHttp configuration. Then you can run your tests in a mode that records requests and responses from the backend into .yaml files (called tapes). OkReplay also provides a Gradle plugin with tasks for pulling recorded tapes from a device or an emulator, as well as clearing them:

./gradlew clearDebugOkReplayTapes - Cleaning tapes
./gradlew pullDebugOkReplayTapes - Pulling tapes from the device or emulator

To run a test in tape record or replay mode, pass the corresponding TapeMode into the OkReplay configuration:

private val activityTestRule = ActivityTestRule(MainActivity::class.java)
private val configuration = OkReplayConfig.Builder()
    .tapeRoot(AndroidTapeRoot(InstrumentationRegistry.getInstrumentation().targetContext, javaClass))
    .defaultMode(TapeMode.READ_WRITE) // or TapeMode.READ_ONLY
    .sslEnabled(true)
    .interceptor(okReplayInterceptor)
    .build()

@JvmField
@Rule
val testRule = OkReplayRuleChain(configuration, activityTestRule).get()

@Test
@OkReplay
fun myTest() {
    ...
}

The OkReplay framework simplifies testing network requests in Android apps, making the results safer and more predictable. There is one important caveat, however: you need tests that can actually reproduce the relevant app behaviour (for example, specific errors from the server). Such conditions are often quite difficult to reproduce, which makes recording the tapes problematic.

Developers have been trying to solve all of the above issues for quite some time. You can find plenty of open-source libraries on GitHub that essentially wrap the Espresso API, solving some of its problems and adding various delightful features. I’m going to tell you about two of them: Barista and Kaspresso.

Barista

Barista is an additional layer of abstraction on top of Espresso, so it offers several features Espresso lacks. Firstly, it adds a number of methods for more convenient interaction with UI elements.

For example, instead of original Espresso code:

@Test
fun myTest() {
    onView(withId(R.id.first_name))
        .perform(typeText(FIRST_NAME), ViewActions.closeSoftKeyboard())
    onView(withId(R.id.second_name))
        .perform(typeText(SECOND_NAME), ViewActions.closeSoftKeyboard())
    onView(withId(R.id.save)).check(matches(isEnabled()))
    onView(withId(R.id.save)).perform(click())
        // write your test as usual...
}

We can write this:

@Test
fun myTest() {

    writeTo(R.id.first_name, FIRST_NAME)
    closeKeyboard()

    writeTo(R.id.second_name, SECOND_NAME)
    closeKeyboard()

    assertEnabled(R.id.save)

    clickOn(R.id.save)

    assertDisplayed(FIRST_NAME)
}

The test has admittedly become more readable. A disadvantage is that you need to keep even more ViewMatcher/ViewAction variants and other helpers in mind than with plain Espresso. However, you can still use the Robot pattern to make your tests more expressive. You can learn more about the available methods here. Barista also provides a number of convenient test rules, for example, for database and SharedPreferences cleanup:

// Clear all app's SharedPreferences
@Rule public ClearPreferencesRule clearPreferencesRule = new ClearPreferencesRule();

// Delete all tables from all the app's SQLite Databases
@Rule public ClearDatabaseRule clearDatabaseRule = new ClearDatabaseRule();

// Delete all files in getFilesDir() and getCacheDir()
@Rule public ClearFilesRule clearFilesRule = new ClearFilesRule();

Whether you should use them or write everything yourself is a good question. Our apps often grow so complicated that the standard tools are no longer enough, and many developers end up writing their own custom cleanup logic.
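The Robot pattern mentioned above could look like the following on top of Barista's helpers. This is a sketch: `GreetingRobot`, the method names, and the flow are illustrative, while `writeTo`, `closeKeyboard`, `clickOn`, and `assertDisplayed` are Barista calls from the example above.

```kotlin
// Sketch: wrapping the Barista calls in a Robot so the test body
// reads as user intent rather than view IDs.
class GreetingRobot {
    fun enterFirstName(name: String) = apply {
        writeTo(R.id.first_name, name)
        closeKeyboard()
    }

    fun save() = apply {
        assertEnabled(R.id.save)
        clickOn(R.id.save)
    }

    fun assertGreetingShown(text: String) = apply {
        assertDisplayed(text)
    }
}

@Test
fun myTest() {
    GreetingRobot()
        .enterFirstName(FIRST_NAME)
        .save()
        .assertGreetingShown(FIRST_NAME)
}
```

The view IDs now live in one place, so a layout change touches the robot, not every test.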

Kaspresso

Another Espresso wrapper, this one created by developers at Kaspersky. This framework provides far more features than Barista. Firstly, it nudges you to write tests using the Page Object pattern by default (the screen DSL itself comes from Kakao, which Kaspresso builds upon). This is an undeniable advantage, since tests look cleaner and are abstracted from the concrete Views and their IDs.

object SimpleScreen : KScreen<SimpleScreen>() {

    override val layoutId: Int? = R.layout.activity_simple
    override val viewClass: Class<*>? = SimpleActivity::class.java

    val button1 = KButton { withId(R.id.button_1) }

    val button2 = KButton { withId(R.id.button_2) }

    val edit = KEditText { withId(R.id.edit) }
}
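A test against this screen object might look like the following sketch, using Kaspresso's `TestCase` base class and its `run`/`step` DSL (the activity rule, step names, and input text are illustrative):

```kotlin
import androidx.test.ext.junit.rules.activityScenarioRule
import com.kaspersky.kaspresso.testcases.api.testcase.TestCase
import org.junit.Rule
import org.junit.Test

// Sketch: driving SimpleScreen from a Kaspresso TestCase.
class SimpleActivityTest : TestCase() {

    @get:Rule
    val activityRule = activityScenarioRule<SimpleActivity>()

    @Test
    fun typeTextAndPressButton() = run {
        step("Type text into the edit field") {
            SimpleScreen {
                edit.typeText("Kaspresso")
            }
        }
        step("Press the first button") {
            SimpleScreen {
                button1.click()
            }
        }
    }
}
```

Each `step` is logged separately, which makes failure reports far easier to read than a raw Espresso stack trace.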

Another important Kaspresso feature is that all ViewActions include a built-in timeout to deal with flaky tests, which is useful when we wait for a response from the backend. This may be convenient but is not entirely reliable, since a timeout alone is sometimes not sufficient. I recommend relying more on IdlingResource and on predefined server responses using OkReplay or server response mocks.
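A minimal IdlingResource setup could look like the following sketch, built on Espresso's stock `CountingIdlingResource` (the `NetworkIdler` object and the call sites are illustrative, not part of Kaspresso or Espresso):

```kotlin
import androidx.test.espresso.IdlingRegistry
import androidx.test.espresso.idling.CountingIdlingResource

// Sketch: a counter the app increments before starting background work
// and decrements when it finishes. Espresso blocks test actions and
// assertions while the count is non-zero, so no sleeps or timeouts are needed.
object NetworkIdler {
    val resource = CountingIdlingResource("network")

    fun begin() = resource.increment()  // call right before a request starts
    fun end() = resource.decrement()    // call when the response is handled
}

// In the test setup (and unregister in teardown):
// IdlingRegistry.getInstance().register(NetworkIdler.resource)
```

The trade-off is that the production code has to cooperate by calling `begin()`/`end()`, usually behind a no-op implementation in release builds.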

Additionally, Kaspresso provides numerous other useful features, like running adb commands right from the test and interacting with the Android system. Considering that all of this comes in a ready-made solution, Kaspresso is an excellent substitute for plain Espresso.

Conclusion

Like many other developers, I've encountered many difficulties while writing Espresso tests. The tests are often complicated, slow and flaky. However, we now have plenty of libraries, frameworks and approaches that can significantly simplify and speed up writing UI tests. If I were starting a new app today, I would write UI tests with Kaspresso from the start. An IdlingResource is a must for synchronizing background tasks with the test itself. If possible, use fake implementations of your repositories, or record your requests and responses with OkReplay. Keep your tests clean and tidy with the Page Object and Robot patterns. If you follow these recommendations, you will be able to significantly improve the quality of your tests and reduce the number of bugs in your Android app code.

Top comments (3)

Konstantin Aksenov • Edited

Thanks for the great topic mate! Highly appreciate that. Here are a few short comments from my end:

Another Espresso wrapper, created by Kaspersky Antivirus developers. But this framework provides far more features than Barista. Firstly, it makes you write tests using Page Object pattern by default. It is an undeniable advantage since the test will look cleaner and more abstracted from the used Views and their IDs.

This functionality is provided by a transitive dependency on Kakao and is not implemented by Kaspresso itself.

In the example with KScreen you override

    override val layoutId: Int? = R.layout.activity_simple
    override val viewClass: Class<*>? = SimpleActivity::class.java

I'm not sure that the default KScreen has any variables like that.

Cheers!

Kabir

nice

Regina Mbewe

This is awesome 👏