In my six years as a QA engineer, automated testing of applications across different web browsers has always been a pain point. It requires maintaining a farm of machines covering multiple operating systems and browsers, and it eats into time we would rather spend writing tests or doing other QA work.
The issues with a local browser farm
Here at Tanker we develop an open source privacy SDK. We are proud of our continuous integration infrastructure, but it is not without caveats. We started out running in-browser and Node tests using the Karma test runner and the Chai assertion library. In-browser tests ran on Edge and IE (on Windows), Safari (on macOS), and Firefox and Chrome (on Linux). You can already see that some very common configurations, such as Chrome on Windows, were not being auto-tested. We also only used the latest OS versions, which is not what real-world users do.
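For context, an in-browser test in this setup is just an ordinary Mocha/Chai test that Karma executes inside each real browser. The example below is purely illustrative and not taken from our actual suite:

import { expect } from 'chai';

// Karma loads this file into every configured browser and reports each result.
describe('a trivial in-browser test', () => {
  it('uppercases a string', () => {
    expect('tanker'.toUpperCase()).to.equal('TANKER');
  });
});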
We also encountered some other issues, which made us consider another solution:
We had issues with Edge and IE, forcing us to run a script to delete cache files before every build (a sketch of this kind of cleanup follows the list)
Another issue on Windows is that you cannot launch graphical applications from a service, so we use a Python server launched manually as a workaround
macOS tends to de-prioritize Safari, making it very slow when there is no apparent activity on the browser (like a mouse click), so we have a script that brings Safari to the foreground periodically
Karma is a little bit flaky when handling multiple browsers in parallel. It takes a big farm to run multiple browsers in an acceptable time for developers
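To give an idea of the Edge/IE workaround mentioned above, here is a minimal sketch of a pre-build cleanup step. It is not our actual script, and the cache path is an assumption used only for illustration:

// clean-browser-caches.js - hypothetical sketch of a pre-build cleanup on a Windows agent.
const fs = require('fs');
const path = require('path');

// Illustrative assumption: the IE/Edge cache location, not our real configuration.
const cacheDirs = [
  path.join(process.env.LOCALAPPDATA || '', 'Microsoft', 'Windows', 'INetCache'),
];

for (const dir of cacheDirs) {
  // Best effort: ignore files the browser may still have locked.
  fs.rmSync(dir, { recursive: true, force: true });
}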
With all this in mind, we decided to give the well-known BrowserStack a try.
Trying out BrowserStack
The first step is to create a BrowserStack account that allows you to do automation. This is free for open source projects like ours. You should now have a username and an access key. After that, you can add karma-browserstack-launcher to your project as a dev dependency. We do this using Yarn:
$ yarn add --dev karma-browserstack-launcher
Now it's time to configure Karma to use this launcher. This is usually done in the karma.conf.js file:
config.set({
  // ...
  customLaunchers: {
    ChromeWindows: {
      base: 'BrowserStack',
      browser: 'Chrome',
      browser_version: '71.0',
      os: 'Windows',
      os_version: '10',
    },
  },
})
This adds one custom launcher that runs your tests on BrowserStack's servers! But for that you need to be authenticated. You probably don't want to leave your credentials in clear text in the source code; instead, configure your favorite CI/CD tool to expose the username and access key as environment variables. Locally, you can do:
$ export BROWSER_STACK_USERNAME=<YOUR_USER_NAME>
$ export BROWSER_STACK_ACCESS_KEY=<YOUR_ACCESS_KEY>
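The launcher normally picks up these two environment variables on its own. If you prefer, it should also be possible to pass the credentials explicitly in karma.conf.js; a minimal sketch, assuming the same environment variables are set:

config.set({
  // ...
  browserStack: {
    username: process.env.BROWSER_STACK_USERNAME,
    accessKey: process.env.BROWSER_STACK_ACCESS_KEY,
  },
})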
You can now run your test suite with:
$ yarn karma --browsers ChromeWindows
You can then see what is going on in the browser thanks to a video recording of the session.
From there, it's up to you to add whatever configurations are relevant to your case. BrowserStack has a configuration tool to help you with that.
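For instance, launchers for Safari on macOS and for a real mobile device might look like the sketch below. The browser, OS, and device values are illustrative assumptions, so check BrowserStack's configuration tool for the combinations actually available:

config.set({
  // ...
  customLaunchers: {
    // Desktop Safari on macOS Mojave (values are examples).
    SafariMojave: {
      base: 'BrowserStack',
      browser: 'Safari',
      browser_version: '12.0',
      os: 'OS X',
      os_version: 'Mojave',
    },
    // A real mobile device (values are examples).
    iPhoneXSafari: {
      base: 'BrowserStack',
      device: 'iPhone X',
      os: 'ios',
      os_version: '11.0',
      real_mobile: true,
    },
  },
})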
Fine-tuning
To sort your builds in the BrowserStack web interface in case you have multiple projects, you can add a project name in the Karma configuration. Another useful setting: running a big test suite on a mobile device, in a data center far away from your office, can easily take more than 5 minutes, so you may want to raise the timeout:
config.set({
  // ...
  browserStack: {
    project: '<YOUR_PROJECT_NAME>',
    timeout: 600,
  },
})
You can also add a BrowserStack status badge to your GitLab/GitHub project page. It takes a couple of commands to set it up:
$ curl -u "<YOUR_USER_NAME>:<YOUR_ACCESS_KEY>" "https://api.browserstack.com/automate/projects.json"
Find your project ID in the response, then:
$ curl -u "<YOUR_USER_NAME>:<YOUR_ACCESS_KEY>" "https://api.browserstack.com/automate/projects/<YOUR_PROJECT_ID>/badge_key"
You’ll get a badge key. Now in your README file:
![BrowserStack Status](https://www.browserstack.com/automate/badge.svg?badge_key=<YOUR_BADGE_KEY>)
Next, to report your test results to the BrowserStack web interface, you can add the BrowserStack reporter. For instance, if you already use the Mocha reporter:
config.set({
  // ...
  reporters: ['mocha', 'BrowserStack'],
})
Finally, we encountered an issue with some browsers that were unable to connect to localhost (Firefox 65 on Windows, for instance). This can be worked around with:
config.set({
  // ...
  hostname: 'bs-local.com',
})
This is actually needed for Safari on iOS testing.
You can now add those browsers to your favorite CI/CD tool. We added them to Travis CI, which runs parallel builds in separate containers, one per browser.
BrowserStack is easy to set up and use…
As you can see, we did not write any actual code for this tutorial. We only added a few lines of configuration. BrowserStack is really easy to set up and use.
… but does not solve all our issues
However, it does not solve all the issues I listed at the beginning of this article. Tests run smoothly on Safari; it is just a little slow. That does not really affect us, though, since we can run more parallel builds than in our local farm, and it no longer slows down our other projects.
On BrowserStack, Edge is really stable. We don't have to perform any cleanup as we do with our local farm, because we always get clean Windows instances. However, this comes with an issue for us: our test suite uses more local storage than Internet Explorer allows by default. In our local farm, we configured the machines to accept more than the default amount of storage (this is not an issue in real life for our users). According to BrowserStack support, that is not possible on their machines. So, as things stand, we cannot use automation on IE.
Conclusion
BrowserStack is a good tool that allows us to quickly configure automated tests for a wider range of configurations (mobile phones, older versions of browsers/operating systems…). But unfortunately, for us it is not a game-changer. We initially planned on getting rid of our local farm, but we can’t until certain issues are overcome:
We cannot run automated tests on IE
Tests are slower because… well it is not our local network
That said, BrowserStack is a great addition to our local farm testing, as it provides us with lots of flexibility. As a bonus, it shows the outside world that we test all kinds of configurations and that we care about all users, not just those with the latest devices. So far, we test 11 different combinations.
PS: This article was originally written by Jérémy Tellaa and published on Tanker’s Medium. As you might not be on Medium yourself, we've reproduced it here to give you a chance to see it in your notifications feed.