As a developer or QA engineer you found a bug in a piece of code, and you cannot fix it straight away (maybe it's not urgent, or you do not have the knowledge or time to fix it), so what do you do? You file an issue report and hope it will be fixed in a future sprint by someone. So far, so normal, but should you also write a test for it? I would argue yes, you should! Why?
- to make sure the behaviour does not get worse
- to make sure you notice when the bug is fixed accidentally as a side-effect of some other change
- to force the developer who fixes the bug to look at the tests
- to have an easy and automated way of reproducing the bug
- to be agile and do test-driven development. Isn't that what you want? Then better start by writing tests!
You can reach this goal by writing a specific type of bug-demonstrating acceptance test. Ideally this test is written in a way that it passes now and will fail once the bug is fixed. So we
- create an (acceptance) test that demonstrates the current behaviour (meaning the behaviour of the bug)
- also write test-steps that show the expected correct behaviour, but comment them out, so they do not run
- tag the test with the corresponding issue-number
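The steps above can be sketched as a small test. The function, its rounding bug, and the issue number #123 are all invented for illustration; in a real project the tag could be a naming convention or a framework marker such as `pytest.mark`:

```python
# Hypothetical buggy function, tracked as (invented) issue #123.
def format_price(value):
    # Bug: truncates to two decimals instead of rounding.
    return f"{int(value * 100) / 100:.2f}"

# The issue number is encoded in the test name; a pytest marker
# like @pytest.mark.issue123 would work just as well.
def test_issue_123_format_price_truncates():
    # Current (buggy) behaviour -- this step passes today:
    assert format_price(2.005) == "2.00"
    # Expected correct behaviour -- commented out so it does not run:
    # assert format_price(2.005) == "2.01"

test_issue_123_format_price_truncates()
```

Note that the test is green as long as the bug exists; the moment someone fixes the rounding, the active assertion fails and pulls the developer into the test.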
So when the bug is fixed, the developer is forced to look at the tests, because they fail. Reading through the test-steps she can decide whether the commented-out steps describe what should happen and match what she has implemented.
If everything is fine she will activate the correct steps, delete the bug-demonstrating steps and un-tag the test. Ideally, more tests will be added as well.
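After that clean-up the test might look like this. Again the names are invented for illustration: a hypothetical `format_price` whose rounding bug (imaginary issue #123) has just been fixed, here with `decimal` and half-up rounding:

```python
from decimal import Decimal, ROUND_HALF_UP

def format_price(value):
    # Fixed: round half up to two decimals instead of truncating.
    return str(Decimal(str(value)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))

def test_format_price_rounds_half_up():
    # The formerly commented-out step is now active; the
    # bug-demonstrating step is deleted and the issue tag removed.
    assert format_price(2.005) == "2.01"

test_format_price_rounds_half_up()
```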
On the other hand, whoever works on the bug and is aware of the tests can do this test-editing process before starting to fix the bug. Then, without any extra work, the developer has a way to reproduce the bug over and over again just by running the tests. She can proceed with normal TDD, modifying the code until the test passes.
BTW: all the credit for coming up with this system goes to Phil Davis.