DEV Community

ashleygraf_


Avoiding false alarms

During my first year as a tester, I had a false-alarm rate I wasn't proud of, until I adopted a routine: some of it borrowed from my mentor Alan Giles, some from the team at Digital Purpose, and some I worked out for myself. A high false-alarm rate erodes the trust between you and the rest of the team, so it's important to get it down. This is what I do.

  • Don't panic. Consider all the possibilities. Are you missing something? What are you assuming?

  • Is your test environment set up properly?

  • Are you using the correct build?

  • Is the person replicating the issue using the correct build?

  • Are you using the test user you think you're using?

  • Check that the server has actually been redeployed. Your team might have alerts hooked up to your deployment pipeline of choice; these come in handy.

  • Hard refresh your browser tab to bypass the cache. That's Ctrl+F5 or Ctrl+Shift+R on Windows.

  • Re-test sequences that find bugs, to check you saw what you think you saw. The trick is not to re-test so many times that you run out of time to fix issues. I've heard of people who record all their testing sessions to make back-tracking easier.

  • However, don't go too quickly either. Accuracy >>>>>>> speed. Or: it's not speedy if it's wrong and you have to do it again.

  • Test with accounts created both before and after the change. Does the feature or bug fix need to work retrospectively? If it's a green-field release, probably not; otherwise, probably. It may also be best to just always test with a fresh account.
    Account set-up can be sped up with a Postman API request, or even a set of them, with the Runner hooked up to the auth details of the new user (the first step done manually to fetch these).

  • Confirm the requirements with the rest of the team. Does it qualify as a requirement, written or unwritten?

  • Check your test data is set up properly. Are you testing what you think you're testing, or is it another scenario?
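A few of these checks can be scripted so they stop relying on eyesight. For the "correct build" checks, a tiny comparison helper saves you from mistaking `1.4.2-RC1` for `1.4.2`. This is just a sketch; the version string would come from wherever your app reports its build (an endpoint, an about page, a log line):

```python
def is_expected_build(reported: str, expected: str) -> bool:
    """Compare the build identifier the app reports against the one
    you intended to test, ignoring case and stray whitespace."""
    return reported.strip().lower() == expected.strip().lower()

# The reported string here is illustrative, e.g. copied from a version endpoint.
print(is_expected_build("1.4.2-RC1 ", "1.4.2-rc1"))  # True
print(is_expected_build("1.4.2", "1.4.2-rc1"))       # False
```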
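The fresh-account step can also be scripted outside Postman. Here is a minimal sketch of the same idea as the Runner: run a sequence of set-up requests and thread the auth token from one response into the next. The endpoints and the `send` function are assumptions — in practice `send` would wrap your real HTTP client:

```python
from typing import Callable, Optional

def run_setup(steps: list, send: Callable) -> Optional[str]:
    """Run account set-up requests in order, carrying forward the
    auth token returned by an earlier step (like Postman's Runner)."""
    token = None
    for step in steps:
        response = send(step, token)
        # Keep the newest token whenever a response includes one.
        token = response.get("token", token)
    return token

# Stub standing in for a real HTTP client; paths are hypothetical.
def fake_send(step, token):
    if step["path"] == "/users":
        return {"token": "abc123"}   # create-user returns an auth token
    assert token == "abc123"         # later steps must carry that token
    return {}

steps = [{"path": "/users"}, {"path": "/users/profile"}]
print(run_setup(steps, fake_send))  # abc123
```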
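And for the test-data check, a few explicit preconditions at the top of a test turn "am I testing what I think I'm testing?" into a hard failure with a clear message, instead of a false alarm later. The field names below are made up for illustration:

```python
def assert_preconditions(account: dict) -> None:
    """Fail fast if the test data isn't the scenario you meant to test.
    Field names here are illustrative, not from any real schema."""
    status = account.get("status")
    assert status == "active", f"expected an active account, got {status!r}"
    assert account.get("created_after_change"), "account predates the change under test"

fresh = {"status": "active", "created_after_change": True}
assert_preconditions(fresh)  # passes silently; a stale account would raise
```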
