
Me: "I tested and it works, why do I have to write tests?"

briwa ・ 4 min read

I started out programming in PHP and JavaScript when I was studying at university. Back then, I knew nothing about any kind of software testing. The only way to make sure the code worked was to test it manually in the browser. I would imagine that bigger companies with more complex software would have dozens of people lining up in front of their software interface, testing it manually.

I remember there was even a spreadsheet, filled with rows of use cases, with checkboxes to tick before launch. Even with that, bugs still appeared. When I shipped code, I did manual testing on the several cases I was aware of, but there was no way I could test all the possible cases the app would allow. It was one of the most counter-productive moments of my development career.

Then I discovered software testing. It was such a blessing. The way it covers use cases systematically, maintains the app's stability, and runs automatically with Continuous Integration truly amazed me. No more spreadsheets with millions of rows, and I could ship code with more confidence.

In my opinion, the concept of testing is still hard to accept for some, even now. I can understand that this is mostly fear of change; getting people out of the comfort zone of "I coded it and it works" is not an easy task. I remember a few arguments against testing:

  • "I have confidence in my code, it won't break."
  • "I tested all the use cases, they're fine."
  • "You can review my code, point out the flaws, and if there's none it should be good to go."

All these arguments come from the "human" factor of the developer: their confidence in their code, their manual testing skills, their flawless code. And that is important; nothing to take away from it. Software testing, on the other hand, covers the cases where the "human" factor fails. Nobody is perfect; mistakes are bound to happen, and we learn from the experience. When you write the code, or even when people review it, and some use cases or flaws are overlooked, the tests might be able to point that out.

And it's not just about maintaining the stability and quality of the code. Tests define the specification of the code. Maybe I was just being dense, but it only recently occurred to me why tests are called "specs" in the first place. When writing tests, it is made very clear from the start what the code can and cannot do. Recently, I even developed a habit of browsing the test folder of a library to find out what it does when I can't find the answer in its API docs or demo page, and it really does help.

Even with all the good stuff about testing laid out, getting people on board is definitely another problem. I can see how people still treat tests as "something that would allow me to merge PRs". Some would probably still say, "the feature works and I tested it, we can merge it first and I'll write the test later". Others would complain that "lots of tests are failing because I changed one line of code".

In my opinion, they are missing the big picture. If you make changes to the code, of course you have to make all the tests pass, because you're making sure that your code is still within the specification of the code (the tests). If you add new code, you have to add tests, because you're adding new specifications to the software. If you ship code without tests, that means you're shipping code without a specification; nothing is set in stone about what works and what doesn't, including bugs.

There is a reason why most open source repositories take tests very seriously: they care deeply about their code's stability. A single bug could put lots of people at stake, both the devs and the users. PRs with failing builds or no tests wouldn't even get reviewed. I would say it shouldn't be that much different for our own (for-profit, maybe?) projects. It might be hard to convince clients or stakeholders that testing is important, but at least it shouldn't be made that hard for the devs.

This article might not apply to you. I am aware that many practices integrate testing into their workflow (e.g. Agile), and a lot of companies are already doing that. But if it does apply to you, I hope this article provides another point of view on what testing is and why it is important, and maybe you can finally get rid of that giant checkbox-ridden spreadsheet.

As always, thanks for reading my article!

Cover image by Glenn Carstens-Peters on Unsplash.


I think the simplest way of explaining it to someone would be something like,

"once you're working on something else, and a new recruit takes over maintenance of this code, are you confident that they won't break it by making what looks to them like a simple change? If that happens, everyone will blame your code..."


True that. The same goes for when you leave the company: without tests (and docs, too), it would be hard to keep the code stable.


The back end team at my work is pretty strongly against testing and I've never understood it. There's nothing more frustrating than a back end dev coming to me saying that the app is broken and they think it's the front end. If they had tests, they'd know it isn't, but I have to spend my time debugging their problem because they have no testing standards in place. Our QA team's job consists of lots and lots of manual testing (i.e., over 1750 test cases when regression testing just one of our larger apps), and when they find something that's broken, no one knows if it resides on the front or back end, because there are no unit tests to tell them.

I've been implementing a lot of testing lately into our front end builds, but when there's nothing being done on the back end, finding bugs can take weeks, which seems ridiculous: writing tests might take a little more time, but at least we'd have confidence in finding issues when they arise, instead of going on a wild goose chase every time :'(

I think a lot of the mindset arises from the, "We're using a compiled language, it'll find the bugs for us," which is completely untrue. This is one of the reasons I struggle with the merits of TypeScript on the front end because it's not very often that my team creates bugs that actually have to do with types. Hiding behind the language as a means of testing is not a valid testing strategy.


Some of the opposition to testing comes down to laziness and an inability to take responsibility for bugs.

RE: laziness:

Writing test code takes 15-30% more initial investment, and is only useful for the x% of code that ends up causing bugs. So most individual contributors would have to put in more hours to complete the same amount of work assigned by management. Additionally, figuring out how to test some units is not trivial, and requires learning tangential to completing tasks. And when there is a QA department to handle the more frustrating parts of programming, debugging and checking for errors, it is easier for the individual contributor to offload part of their work onto somebody else.

RE: taking responsibility

Having tests break means that the source of bugs becomes more obvious. When the bug-hunting process is opaque, it is easier for contributors to cover their own asses and covertly patch bugs. In office politics, perception is reality; if the backend developers are willing to cast the blame onto the front end or API clients for bugs, it looks better for the backend. Even if the bug ends up being proven to be a backend bug, the initial blame damage has been done, and managers are already convinced of whom to blame first in the future.

The only arguments I've seen work are

  1. Management recognizes, supports, and rewards quality; Quality is attained in part through testing.
  2. Most show-stopping, up-till-4am bugs are caused by poor code maintenance, and the only way to get out of stressful firefighting mode is through rigorous testing. (This argument doesn't work if the developer enjoys the thrill or has no life outside work.)

This makes me sad. I'm sorry your back end team is so backwards. 😟


When you write tests, it's not about you really. It's your way of communicating to some other developer down the road what the intended behaviour is. They're like really good comments, but executable.


(Tests) are like really good comments, but executable

First time I've heard about this, and I liked the idea! 👍🏻


I believe in learning from experience. If someone questions why they should write automated tests, I'd task them with running regression tests manually on every deploy for an entire Sprint. I believe they'll never question the usefulness of automated tests again.


I would write tests, but the project I am working on at work had no tests when I joined, and it is too big to go back and write them now; my personal projects seem too small to benefit from the effort at the moment. My big issue with tests is: "how do you ensure the tests themselves are correct and 100% accurate? Tests written by humans will have errors and bugs at some point. Do we write tests for the tests themselves?"


Writing (automated) tests starts with one small step: the desire to write them. If you can't state precisely what your code is meant to do (by writing a test), how can you even start coding? Once you have written a few, you will start to notice that your features get bounced by QA less often, and a 'virtuous cycle' starts.
Writing tests is a habit, which takes practice. You could try practicing on your personal projects.


how do you ensure the tests themselves are correct and 100% accurate

If you mean how to avoid false positives in tests: of course, tests, like any other code, should be peer-reviewed by your teammates. And most importantly, you can check the code coverage of your unit tests so that you know the tests are actually covering the branches and statements of your code accurately.

As for dealing with a codebase that has no tests, I feel you. I've been there before. What was tricky was actually splitting the code so that it's testable, separating pure functions from side effects. That should be the first step, even before writing any tests. Then, start increasing test coverage gradually, starting with helpers and small pure-function modules, down to components. FWIW my experience is from a frontend codebase, but I hope you get the gist of it.
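As a rough sketch of that split (the cart example and all names here are made up for illustration), the pure calculation is pulled out of the side-effecting code so it can be unit-tested on its own:

```javascript
// Pure: no network, no DOM; trivial to unit-test in isolation.
function totalPrice(items, taxRate) {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  return Math.round(subtotal * (1 + taxRate) * 100) / 100; // round to cents
}

// Side effects live at the edge; unit tests don't need to touch this.
async function renderCart(fetchItems, el) {
  const items = await fetchItems();                // side effect: network
  el.textContent = `$${totalPrice(items, 0.07)}`;  // side effect: DOM
}
```

Once `totalPrice` is extracted, you can cover all the pricing edge cases without mocking a server or a browser, and leave the thin `renderCart` wrapper to an end-to-end test.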

Small projects, prototypes: yes, maybe your argument about not having tests is valid for those. But as soon as those small projects are used by someone else (maybe you made them open source), and/or have the possibility of getting complex, that's where tests can be beneficial.


I have often found bugs while adding tests, even when I thought the code worked fine. I often keep adding tests until almost all the code is covered. Unless it's purely a passion project or a proper proof of concept, I can't imagine not writing tests.


Yes, writing tests helps me "rediscover" my own code as well: optimization, finding bugs, making it DRY-er.


In my opinion, tests are necessary but can also be taken to the extreme. The goal should be "write more stable code", with one of the strategies being writing tests. I've seen goals of "100% code coverage", which isn't really a goal.

I always write tests for business logic and algorithms. They're fantastic when you're refactoring, protecting against regressions, and not having to jump through a bunch of manual testing steps. Tests also help me keep my code lean; small functions are not only easier to test, they're also easier to read and write. I also write tests for any specific bugs that crop up: again, regression protection.

Tests do have a tech-debt overhead and, for large projects, a significant run-time cost; but in my experience, the positives far outweigh the cost.


I've seen goals of "100% code coverage" which isn't really a goal.

True that. I've only gone for 100% test coverage for libraries, mostly pure functions. There are parts of code that may have a case for not being tested in actual projects.

It's fantastic when you're refactoring, protecting against regressions, and not having to jump through a bunch of manual testing steps

This exactly! One of the best moments was when I did an overhaul to a well-tested code, keeping the interface so that it is still within the specs but completely rewriting/improving the code, then seeing the tests passing still, that is such a wonderful feeling.

I also write tests for any specific bugs that crop up, again, regression protection.

Yes. IMO regressions come from parts of the code that we didn't cover; you could say that a particular area of the specification was missing. So it makes sense to make sure it is covered by writing a test for it.


This exactly! One of the best moments was when I did an overhaul to a well-tested code, keeping the interface so that it is still within the specs but completely rewriting/improving the code, then seeing the tests passing still, that is such a wonderful feeling.

I came down to the comments to say this, you beat me to it hehe.

Tests used in this way are like the checkboxes you mentioned in the article. Even if you think you remembered all the use cases, there might be one that you came across before but forgot about during the refactor. If you wrote a test for that use case, the failing test would remind you of it, instead of a faulty build.


Another sad example...

situation: integration tests are done manually, only unit tests exist
developers: we have started writing a tool for integration tests, but it needs more work and it should be mandatory to be used in all projects
management: let's do an audit first, I still know some guys at this big consulting company
consulting guys: we checked your paperwork and there seem to be no integration tests
management: alright, from now on there's a lot of paperwork mandatory for every manual integration test


Ha, yup, another reason to test your code properly and maintain the best code quality: one day it's going to get audited...


I would tell them that the next person will have no idea if it works when they make a change and need to retest; the creator of the code should probably create these tests, since they know what their code does.


True that. To me, writing a test without context (especially if the code isn't yours or isn't well documented) is a counter-productive process. There are potential mistakes waiting at the end: false positives, over-testing...


Even ignoring the obvious benefits of writing tests, testing manually is simply a pain.

As the lazy developers that we are, once we realise that writing a few lines of code is the simpler option, it becomes a no brainer : )


Mr. Briwa, can I ask you something?
Before you write code, do you consider all the testing that you need to do?


Sure. Ideally, yes, you can start with the tests, unit tests for example, then write the code. This defines what your code can do, so that the scope is clear from the start. In fact, this is what TDD is trying to achieve: start by specifying the requirements through the tests, write the code so that the tests pass, refactor if needed while keeping the tests passing, and repeat.

Also, if it has anything to do with side effects or end users (e.g. browsers, devices), I would add end-to-end testing at the end, to make sure all components integrate properly and to test as close as possible to how the users actually use the app. This comes only after my unit tests have covered everything modularly.


Thank you sir :) I'm new when it comes to testing.


Tests are great, but if you're changing one line of code and breaking many tests, you may have hit the test fixup antipattern.


People use snapshot testing for convenience's sake, as opposed to line-by-line assertions, and it comes with a price. What you described in your post are, to me, the tradeoffs for that convenience. In my opinion, every single line of the snapshots is a test assertion. It basically specifies that the component should be exactly as it is defined in the snapshots. If you've been updating snapshots only because they failed, well, you shouldn't. You need to know why, and all the failures have to be reviewed properly, much like you review your other non-snapshot test assertions.

As for changing one line causing many tests to fail: in my opinion, that is because the tests specify it that way (even in your snapshot testing case). In my example, the idea was that the change affects the interface (e.g. a sum function now takes an array as a single argument instead of multiple numbers as multiple arguments), so all the tests for that code would fail anyway. My point is more about the mindset of complaining because fixing tests seems like extra work, while in fact it is part of the work (updating the specifications).

Patterns, best practices, what to do/not to do in testing might not be the scope of this article, but thanks for bringing that up anyway!


Good article, also can someone recommend some good resources to learn the principles of testing? Thank you.


My go-to for learning resources and references would be the Awesome Lists, and there is one about testing: github.com/TheJambo/awesome-testing. Although I would say I'm not strong on the fundamentals of testing itself, more on the practical usage of it, so I can't comment much on this. If anyone has any books to recommend, let both of us know!

P.S.: If you're a frontend developer, this video is one of my favorites on how to test your components (the context is Vue, but there are points in the video that are applicable to any other frontend framework).


Implementing tests is really good, and it helps to deliver software faster and more reliably. But for a good approach, devs should start with TDD. This is a good way to develop software. Tests and functional code should not have a lot of dependencies on each other; software with a lot of dependencies is bad software.


Well-structured tests, along with a high percentage of test coverage, make it feel so much less stressful to add, remove, or change functionality.