Fun fact: someone taught me this early in my career as a best practice, to make sure my tests were actually doing something.
These days I always try to start by writing my test with an assertion that is guaranteed to fail, run the tests, verify that it does indeed fail, and then rewrite the test so it passes.
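A minimal sketch of that habit, assuming a Jest/Vitest-style runner; `formatPrice()` is an invented function used purely for illustration:

```ts
// Hypothetical helper under test.
import { formatPrice } from "./formatPrice";

test("formats a price in euros", () => {
  // Step 1: a deliberately wrong expectation - run it and watch it fail,
  // which proves the test actually exercises the code.
  // expect(formatPrice(1050)).toBe("this will never match");

  // Step 2: once the failure is confirmed, swap in the real assertion.
  expect(formatPrice(1050)).toBe("€10.50");
});
```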
Or delete some parts of the implementation and see if some tests fail. I sometimes split commits into test and implementation so that I can see the failing pipeline.
My workflow is almost always:
That's why you should
A Test Driven Development practitioner (I'm not one) would probably sigh (at least) reading this. According to TDD, you should first write your (obviously failing) test, and then write the code that makes it pass. That way, checking that tests are actually doing something is part of the workflow.
Your list is great :-) I'm possibly old and cynical these days but I'd maybe add '...and nothing seems to be getting much better'.
On the testing side - I've recently started running 'mutation tests' on my code (infection.github.io/ in particular). It's been quite eye-opening, and a little depressing, watching it flip 'true' to 'false' all over the place with the tests still passing :-/. In my (feeble) defence, a lot of the time it's because in one test I'll be checking that related-fieldX has changed the way I expect, and in another checking the true/false aspect - but it has still caught a lot of things I'd otherwise have missed :-)
...or stryker-mutator.io
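For anyone who hasn't seen a "surviving mutant", here's a rough sketch (TypeScript, with invented names) of the situation described above: the mutation tool flips a boolean, and the only nearby test asserts on a related field, so nothing fails:

```ts
interface Order {
  shipped: boolean;
  shippedAt: Date | null;
}

// Hypothetical code under test.
export function markShipped(order: Order): Order {
  return {
    ...order,
    shipped: true, // a mutation tool will try flipping this to false
    shippedAt: new Date(),
  };
}

// The existing test only checks the related field, so the mutant survives.
test("markShipped sets the shipping date", () => {
  const result = markShipped({ shipped: false, shippedAt: null });
  expect(result.shippedAt).not.toBeNull();
  // Missing: expect(result.shipped).toBe(true);
  // Without it, the true -> false mutant goes unnoticed and is reported as survived.
});
```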
When I'm assigned a bug and there's no unit test that covers it, the first thing I do is write the test that fails, and then go about fixing it. Usually, along the way, I discover additional cases that would cause similar failures and hadn't been covered yet.
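Roughly that flow sketched in code (again Jest/Vitest style; `slugify()` and its bug are made up for illustration):

```ts
import { slugify } from "./slugify";

// 1. Reproduce the reported bug as a failing regression test first.
test("slugify collapses repeated separators", () => {
  expect(slugify("hello   world")).toBe("hello-world"); // fails before the fix
});

// 2. While writing the fix, nearby edge cases surface - cover them too.
test("slugify trims leading and trailing separators", () => {
  expect(slugify("  hello world  ")).toBe("hello-world");
});
```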