Automated browser testing with Selenium can be a developer's best friend and, at the same time, their worst enemy. Tests that once seemed to work well can mysteriously start to break, fail intermittently, or slow to a crawl. But your relationship with Selenium doesn't have to be so complicated.
Trust and stability are paramount to maintaining a robust relationship with your tests, and we've learned valuable lessons about cultivating a healthy partnership between engineers and their tests while developing our flagship app, Shoutbase. We've compiled a list of practices that have helped us love our tests as much as we love our production code.
Testing is a matter of trust. You need to be able to rely on a test to run exactly the same way and produce the same result every time, whether it runs independently or as part of a test suite. Without rigid consistency, your expected result becomes a moving target, and when something goes wrong it's considerably more challenging to isolate the issue if you don't know exactly what to expect in the first place. Tests that are allowed to affect one another also share their problems: errors can leak from one test into the next, making it hard to find the root cause of an issue.
Thankfully, this consistency can be enforced with a couple of simple steps. First, start each individual test from a reliable state. Create a utility to set up this sanitized starting environment: it should log out, clear cookies and auth tokens from storage, and remove anything else your app requires to reach a clean state.
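As a minimal sketch of such a utility, assuming a Selenium-style WebDriver object (the helper name and `base_url` parameter are hypothetical, but `get`, `delete_all_cookies`, and `execute_script` are standard WebDriver calls):

```python
def reset_browser_state(driver, base_url):
    """Return the browser to a known-clean state before each test."""
    driver.get(base_url)                  # storage APIs need a loaded page
    driver.delete_all_cookies()           # drop session cookies and auth tokens
    driver.execute_script("window.localStorage.clear();")
    driver.execute_script("window.sessionStorage.clear();")
```

Call this at the top of every test so each run begins from the same blank slate.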
Second, take care to write self-contained tests. While you might be tempted to use data from an upstream test as the setup for another, this should absolutely be avoided: you don't want future changes to upstream tests to break the ones downstream. Instead, always create users and data points from scratch.
Additionally, clean up after your tests by deleting any data created during each one. By containing the data within each test, you can trust your tests will not impact each other.
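One way to sketch this create-and-clean-up pattern is a context manager; here `api` stands for any test helper exposing `create_user`/`delete_user` (hypothetical names) — the point is that each test makes and removes its own data instead of borrowing state from an upstream test:

```python
from contextlib import contextmanager

@contextmanager
def fresh_user(api):
    """Yield a brand-new user for one test, then clean it up."""
    user = api.create_user()
    try:
        yield user
    finally:
        api.delete_user(user["id"])   # cleanup runs even if the test fails
```

Because the deletion sits in a `finally` block, the data is removed whether the test passes or blows up halfway through.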
As software engineers, we love reusing code. However, employing production code within tests can be dangerous. Even if you introduce functional testing toward the end of the development cycle, your code will likely undergo significant future changes, and altering a piece of production code used heavily throughout your tests can cause gnarly ripple effects, including error leakages, making it harder to determine where problems lie. It also muddies your testing concerns: are you testing a utility you imported, or are you testing something in the UI?
API client code is perhaps the most tempting to import and reuse within tests, but again, this should be avoided. Instead, create test utility functions that make raw API calls; these will not be subject to as much iteration and change as the production code.
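A hedged sketch of such a helper, built on the standard library only so it shares no code with the production client (the URL, path, and token handling are illustrative assumptions):

```python
import json
import urllib.request

def build_api_request(base_url, path, payload, token=None):
    """Build a JSON POST request for test setup, independent of production code."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = "Bearer " + token
    return urllib.request.Request(
        base_url + path,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )

def api_post(base_url, path, payload, token=None):
    """Send the request and decode the JSON response."""
    req = build_api_request(base_url, path, payload, token)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Splitting request construction from sending also makes the helper itself easy to unit-test without a live server.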
Selenium tests the UI and should be reserved for end-to-end user-interaction tests. Selenium can be slow and cumbersome at times, especially once you've employed these strategies to ensure stability. You don't want to slow your test suite further by using Selenium to check that a sorting function works correctly when a faster unit test could do the trick.
For background functions the user doesn't see, such as utilities and client functions, use other testing tools and keep these concerns separate.
As stated before, Selenium tests typically take longer to run due to setup and browser latency. There are a couple of key ways to reduce this burden.
First, avoid programming long pauses and excessive retries; these only extend the runtime of a problematic test. Sleeps and retries can sometimes help your tests handle problems outside of your code, such as browser or WebDriver issues, but keep in mind that they are expensive and should be used sparingly. They can also cover up race conditions, which can come back to bite you later.
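Selenium itself offers explicit waits (WebDriverWait with expected conditions) for element-level polling; as a framework-agnostic sketch of the same idea, a helper like this polls a condition at short intervals and fails fast at a hard deadline instead of sleeping blindly (the timeout and interval defaults are illustrative):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Return the condition's truthy result, or raise TimeoutError at the deadline."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result            # succeed the instant the condition holds
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(interval)         # short poll, not a long fixed pause
```

The best case finishes as soon as the condition holds, and the worst case raises instead of silently padding every run.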
Next, always throw an error as soon as there is a problem; do not allow the test to move on to the next step. Not only is the runtime of Selenium tests costly, but equally expensive is the time you might spend wading through layers of cascading test failures looking for the root cause.
When a test fails, the developer transforms into a detective, and collecting evidence is key to finding the bug efficiently and effectively. When an issue surfaces in the UI, the potential sources of error are numerous: not only is the frontend code suspect, but so are the backend, the browser, and potentially the testing framework itself. These are the key pieces of evidence we collect when debugging test failures:
- log reports from the backend
- screenshots of the browser at the moment a test fails
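One way to sketch the screenshot-collection step is a decorator that saves an image on any failure before re-raising; the `get_driver` accessor and file-name pattern are hypothetical, while `save_screenshot` is the standard WebDriver call:

```python
import functools

def capture_on_failure(get_driver, path_template="failure_{name}.png"):
    """Wrap a test so a screenshot is saved whenever it raises."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            try:
                return test_fn(*args, **kwargs)
            except Exception:
                # collect evidence, then fail fast by re-raising
                get_driver().save_screenshot(
                    path_template.format(name=test_fn.__name__))
                raise
        return wrapper
    return decorator
```

The error still propagates immediately; the decorator only adds the evidence-gathering side effect on the way out.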
Maintainability is just as important in your tests as it is in your production code. Your tests are a living document, and maintaining them is a tax; the same techniques you use for writing good code can be applied to your tests to make that tax easier to pay as the production code evolves.
First, keep your tests DRY to achieve maintainability. Speed up writing and refactoring by using utility functions to handle repeated tasks.
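For example, a shared login helper keeps every test from repeating the same clicks; the selector values here are hypothetical, while `find_element(by, value)` follows the Selenium 4 WebDriver signature with locator strings like `"name"` and `"css selector"`:

```python
def login(driver, email, password):
    """Perform the login flow once, so individual tests don't repeat it."""
    driver.find_element("name", "email").send_keys(email)
    driver.find_element("name", "password").send_keys(password)
    driver.find_element("css selector", "button[type='submit']").click()
```

When the login form changes, you fix this one helper instead of every test that logs in.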
Second, keep each test scoped to a single concern, and avoid testing code or behavior already covered elsewhere. When an isolated piece of production code changes, you don't want to find yourself fixing several different tests.
Third, take care to document your test code, whether with a self-documenting framework such as Cucumber or with comments that capture the intent of each test. Documentation significantly decreases the time it takes to untangle a test's intent and makes it easier for you or someone else to refactor the code later. It can also give you an idea of your testing coverage, that is, how much of the functional UI code is tested.
We believe functional testing is essential for every user-facing interface. Over time, we have honed our techniques to make Selenium tests robust, easy to maintain, and even enjoyable to write. There are common, avoidable pitfalls that can turn functional tests into a nightmare; we hope you can follow these tips and watch your tests run like a dream.