
Karel De Smet

Laying down the foundations for automated UI testing

Ah yes, automated UI testing. That thing you want to implement without really knowing why. Or how. But how difficult can it be?

Well, quite difficult as it stands. It’s not just a matter of picking a tool and getting started; you need a solid foundation to make this work. Don’t make the mistake of building a proof of concept and presenting a flawless demo at the next management board meeting. That will only heighten your seniors’ already astronomical expectations.

Know what to achieve

Some of those expectations might include:

  • Reducing overall testing workload
  • Reducing the amount of bugs that make it into the production environment
  • Improving customer satisfaction
  • ...

It's important to know what's expected. And don't just accept what they throw at you. Encourage your peers and seniors to be realistic. Test automation isn't going to bring world peace.

Make sure you know how to test it manually

Don’t try to run before you can walk. It’s better to develop a sound manual testing flow before you get to work on automation. For me personally, that means having a set of test cases per application in a spreadsheet (view an example here), with at least the following columns per test case (a minimal code sketch of this structure follows the list):

  • ID
  • Name
  • Category
  • Prerequisites
  • Priority: Keep it simple. “Low”, “Medium” and “High” is usually sufficient. When there’s not enough time to execute the full set, use this field to decide what to skip: start with the high-priority cases, then the medium ones.
  • Scenario: Don’t describe the scenario with too much implementation detail. In my opinion, it’s better to write “log in” than “click on the username field, type in your username, click on the password field, type in your password, click on submit”. In most cases, that’s just too much detail.
  • Expected result: The same goes for the expected result. Too much implementation detail takes away the readability of your test cases. Yes, you’ll need that for automation. But you can detail that in a separate file the moment you need it.
  • Status: I usually go with five possible values:
    • Not completed
    • Passed
    • Failed
    • Blocked
    • Not applicable
  • Tester name
  • Extra information: This could be a link to JIRA, Confluence or Stack Overflow, or just plain text describing whatever extra information the tester should know. If the test case has a status of “Failed”, “Blocked” or “Not applicable”, you should explain why here.
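
To make this concrete, here’s a minimal Python sketch of what such a row could look like once the spreadsheet is exported to CSV. The file name, column order and helper function are assumptions for illustration, not part of any particular tool:

```python
import csv
from dataclasses import dataclass

@dataclass
class TestCase:
    id: str
    name: str
    category: str
    prerequisites: str
    priority: str           # "Low", "Medium" or "High"
    scenario: str           # high-level steps, e.g. "log in"
    expected_result: str
    status: str             # one of the five values listed above
    tester_name: str
    extra_information: str  # links or free text

def load_test_cases(path: str) -> list[TestCase]:
    """Read the exported spreadsheet into TestCase objects."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))
    return [TestCase(*row) for row in rows[1:]]  # skip the header row

# Example: list everything that still needs a run.
cases = load_test_cases("test_cases.csv")  # hypothetical export file name
todo = [c for c in cases if c.status == "Not completed"]
```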

Now, you’ll probably want to collect input and feedback from your team members on how these test cases fare. Are they clear enough? Too specific? Any important test cases missing? Ask your colleagues to execute (some of) these test cases and let them provide honest feedback.
Then work through the feedback and discuss how you handled their questions and remarks. Iterate if necessary. In the end, any colleague who understands the application from a business perspective should be able to finish the whole set without too many open questions.

Write down the testing agreements

Next, define testing agreements so everyone’s on the same page. Some examples:

  • Which browser should be used for desktop testing?
  • Which device should be used for mobile testing?
  • When should these test cases be executed? Daily, weekly, monthly, before or after every release ...? If the full set is relatively large, you could also indicate the frequency per category.
  • In which environment should these test cases be executed?
  • What test data can and should be used? It’s advisable to have a separate tab that contains all test data and update it regularly.

Again, discuss these agreements together with your co-workers (and perhaps your seniors) to ensure you’re all on the same page. Communicate them clearly and include them in a separate tab of the spreadsheet that contains your test cases.
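
One low-tech way to keep these agreements next to your future automation code is a small config. The sketch below is hypothetical; every value is just an example of what a team might agree on:

```python
# Hypothetical values throughout; adjust to whatever your team agrees on.
TESTING_AGREEMENTS = {
    "desktop_browser": "Chrome",           # browser for desktop testing
    "mobile_device": "iPhone 12",          # device for mobile testing
    "environment": "staging",              # where the test cases run
    "frequency": {                         # cadence per category
        "Smoke": "daily",
        "Regression": "before every release",
    },
    "test_data_tab": "Test data",          # spreadsheet tab holding test data
}
```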

Measure the time you spend on manual testing

If you want to prove that automated UI testing decreases your workload, you first need to figure out what manual testing actually costs. So when you’re performing tests, start a timer and note down how long each session takes.

Don’t just stick to the actual testing part though. Include everything that comes with it, such as registering bugs, re-testing, discussing outcomes with your team members and communicating the results, as these are also tasks that could be (partially) automated.
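
If you want something more structured than a stopwatch and a sticky note, a tiny helper can append each activity to a log. This is a sketch under the assumption that a simple CSV log is good enough:

```python
import csv
import time
from contextlib import contextmanager

@contextmanager
def timed(activity: str, log_path: str = "testing_time_log.csv"):
    """Time a block of work and append (activity, seconds) to a CSV log."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed = round(time.monotonic() - start, 1)
        with open(log_path, "a", newline="", encoding="utf-8") as f:
            csv.writer(f).writerow([activity, elapsed])

# Wrap each chunk of work, not just the test runs themselves.
with timed("execute smoke test cases"):
    ...  # run through the spreadsheet
with timed("register bugs"):
    ...  # file the issues you found
```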

Aim for the middle

Et voilà! You can now tell your boss you’re embarking on a test automation adventure, and you’ll return with automated test cases. How many automated test cases, you ask? Well, that depends.

Looking back at the intro, I warned against using a “proof of concept” (POC) to win management backing for the time you’ll spend automating test cases. The problem with this approach is that most people pick easy-to-automate test cases, and management will then think: “Did he automate this in 2 days? Think about what he can do in 2 months!”

On the other hand, you usually can’t automate the whole set without getting some feedback in between. It’s too risky, and you might want to use that feedback session to ask for more resources (e.g. extra software or assistance from developers). So it’s best to aim for the middle and take a reasonable number of test cases of average complexity. In my experience, somewhere between 5 and 15 percent of the total number should suffice.
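
As a back-of-the-envelope illustration, building on the earlier test case sketch: you could take roughly 10 percent of the set (the middle of that 5 to 15 percent range) and prefer “Medium” priority cases, using priority as a rough, assumed proxy for average complexity:

```python
import random

def pick_first_batch(cases: list, fraction: float = 0.10) -> list:
    """Pick ~10% of the set, preferring "Medium" priority TestCase objects."""
    batch_size = max(1, round(len(cases) * fraction))
    medium = [c for c in cases if c.priority == "Medium"]
    pool = medium if len(medium) >= batch_size else cases
    return random.sample(pool, batch_size)
```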

Picking a tool

Now it's up to you to make an educated decision on which tool to use, although I wouldn't get too hung up on the details. Just make sure you're not locked in by a big investment or long-term contract, so you can still switch if you feel you need to.
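
To give a feel for scale: whichever tool you end up with, the automated version of a single test case is usually short; it's the groundwork above that makes it worth running. Here's a minimal Selenium sketch for the "log in" scenario from earlier, with a made-up URL, element IDs and credentials:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # placeholder URL
    driver.find_element(By.ID, "username").send_keys("test-user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Expected result from the spreadsheet: the user lands on the dashboard.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```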

Good luck!

Oldest comments (4)

Liviu Lupei

The Medium article that you mentioned contains a lot of false information and it's basically just a spam article that promotes Katalon Studio.
It is not a fair comparison.
Even if you look at the author's profile on Medium, all his articles follow the same "Best Testing Tools" format, where he just promotes that tool.
I reported that article to Medium and everyone should do the same thing.

Karel De Smet

Hi,
You're right, my mistake. It seemed like a reasonably objective piece, but looking at the comments on Medium, it's not a valid reference, as you rightly say. It also just slapped "2020" on the title while it's actually an article from 2017 ...
I'll delete it from my post and try to find an article that provides an unbiased overview of some of the most well-known tools, although that's easier said than done. If you know of any, please feel free to share :)

Liviu Lupei

Hi Karel, I'm glad I could help.

Please also report that article if you get a chance.

There are a lot of articles out there, but most of them are biased and unfair, with the clear purpose of promoting a certain tool.

I cannot recommend any article since I work for Endtest and it wouldn't be fair.

By the way, your article is great! Thank you for sharing that information.

Hemanth Yamjala

This is very informative, Karel. Thank you for sharing this post. While we are on the subject of UI testing, check out this interesting post on the accessibility aspect of UI.
cigniti.com/blog/web-accessibility...