
Jan Wedel


Everything That's Not Tested Will Break

Full disclosure: I'm a big fan of testing, especially TDD. And mostly, I am talking about professional software development (you actually get paid for what you code). This article is for all of those who either do not test or are not allowed to test. It still amazes me how many companies, departments or teams do not test code properly.

But why should I test? Because everything that is not tested will break. It's just a matter of time.

To be very clear, I'm not saying that 100% test coverage is the goal, because there is always some code that simply cannot break - like a simple entity class. But everything else should be covered at some point. I'm also not saying that all untested code will have bugs, but once you have a bug, it's very likely to be located in untested pieces of your code base. Since you don't know in advance where a bug will happen, writing tests reduces the probability of it happening.

To do that, there are plenty of "battle-tested" techniques I've seen so far:

Advanced Testing Techniques

Compiler-Driven-Design (CDD)

I mentioned that in my WTF article. When you are practicing CDD, you must be a very optimistic person. You look up stuff on Stack Overflow, copy-paste, modify and comment out code until it compiles.

Job done. Commit. Let's watch some cat videos.

Belief-Based Design (BBD)

This is a variant of CDD where you are very, very sure that there are no bugs in your code, because you looked at it and because you ran it on your PC once. It will definitely run in production.

Let's watch some cat pictures and rely on the idea of...

Let the Customer Test

Great idea! If you're lucky and a customer actually exists, let them test the software, finding and reporting bugs. Isn't that what warranty is about? Just make sure the customer feels bad because you reject bugs for not using your issue template.

Manual Testing

Now we're getting pro. You actually perform some testing yourself! How exciting!

While watching cat pictures, you click on the button you've just implemented and it works! Nice job. Let's implement the next feature.

After the customer calls, you yell at them for being stupid - you did test it, after all - and hang up.

What you don't know yet is that, while implementing the second feature, you actually broke the new button code.

Having learned it the hard way, you plan to test all the old features whenever a new one is implemented. You write down all the things to test as a reminder for next time. Let's call it a test specification.

But as the product grows, testing just takes more time than you can afford. Your manager yells at you because you're not writing features anymore.

Then your product is about to be released, the customer is waiting, but you can't finish all the features including tests. So you skip the tests, but you get a bad feeling that something's wrong.

And you're right: as soon as the customer uses your product, it breaks badly. Finding the cause takes you a couple of days, and fixing it takes two weeks.

So what's the Deal?

We just saw some of the pitfalls of software development. What can we do? In my humble opinion: testing.

But testing what?

Testing Scopes

There are different scopes you can test:

Unit Tests (abstraction level: low, cost: $)
They cover "units" of code, usually but not necessarily a class. Unit tests cover most parts of the system, especially all the edge cases and error conditions, as these are easiest to model at this level. It's like taking a magnifying glass and checking the welding spots of a car. (See the sketch right after this list.)

Integration Tests (abstraction level: medium, cost: $$)
They take larger parts of the system (i.e. multiple units) as a whole. They usually test the interaction between multiple classes and external interfaces such as HTTP, databases, message queues etc. Those interfaces are either mocked or replaced by in-memory, in-process substitutes that allow running the tests on your computer without any external systems in place. Integration tests can be sped up to run within a unit-test suite. They are like making sure the doors actually fit into the chassis.

System Tests (abstraction level: high, cost: $$$)
They test the whole system as a black box by interacting with external interfaces such as UI, HTTP, MQ etc. Those tests are the most expensive to write and the longest running, which is why they usually only cover the main business use cases from end to end. It is very important to have an environment identical, or at least very close, to your production system, including server specs, OS, database etc. System tests check that your car is actually able to drive and that the brakes actually stop the car when you push down the pedal, without anything else breaking.

Acceptance Tests (abstraction level: feature, cost: $$$$)
They cover especially the main use cases that are relevant to the customer or product owner. The idea is that those tests can be written by the customer, and once they are green, the feature is accepted. You could view it as something like: my car can get me from home to work in 30 minutes with a mileage of 8 l/100 km (about 29 mpg).
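To make the cheap end of that list concrete, here is a minimal unit-test sketch in JUnit 5 with AssertJ. The PriceCalculator class is made up for illustration and inlined so the example is self-contained:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // hypothetical production class, inlined to keep the sketch self-contained
    static class PriceCalculator {
        int totalWithVat(int netCents, int vatPercent) {
            return netCents + netCents * vatPercent / 100;
        }
    }

    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    void addsVatToNetPrice() {
        assertThat(calculator.totalWithVat(1000, 19)).isEqualTo(1190);
    }

    @Test
    void zeroNetPriceStaysZero() {
        // one of the cheap edge cases unit tests are ideal for
        assertThat(calculator.totalWithVat(0, 19)).isEqualTo(0);
    }
}

Each of these tests runs in milliseconds, which is exactly why this scope is the place to pile up edge cases.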

There are a couple of other kinds of tests, like performance, endurance and smoke tests, just to name a few, which I will not cover here.

From top to bottom, the effort and thus the cost to write tests usually increases while the level of detail decreases. This is described in Martin Fowler's article on the Test Pyramid.

Testing scope is an important but difficult beast to master. It does not make sense to cover 100% of the code in all scopes; it's just too much effort. So you always need to decide what to cover in which scope. It's always a trade-off and a matter of experience, but it's highly unlikely that you can build software in just a single scope. Since unit tests are much easier to write, cheaper and faster, you want to cover as much code as possible with unit tests (with an emphasis on possible, because there is a reason why you need higher-abstraction tests like integration tests).

See "Two unit tests and no integration test."

From a theoretical standpoint, I really like the idea of allowing a customer or product owner to write acceptance tests in a readable, textual domain-specific language. However, the effort to enable this is quite high and, to be honest, we never actually used automated acceptance tests. We covered acceptance-relevant cases in system tests instead.

Automation & Speed

The most important thing about testing: it needs to be automated and it needs to be fast.

Automation helps to remove the mistakes that humans make. Tests need to be fast so you can run them on each and every code change and get immediate feedback. A unit test should run in milliseconds, the whole suite in a few seconds. Integration tests should also run in a few seconds, up to 30 seconds. System tests may run a few minutes, but only if it is really necessary.

Automation has the added advantage that you can run test coverage analysis, which will show you uncovered lines or branches in your code. As stated above, do not try to achieve 100% code coverage. Use a coverage tool as a helper to see whether you might have missed something, like an else branch, and apply some common sense.

Just as an indication, in my team, we usually have something between 60% and 80% coverage.

Regression Testing

Automation helps you especially with regression bugs. After implementing new features, modifications or refactorings, you can just let your test suite run to find out whether you broke something that worked before. This is a big difference compared to manual testing. Testing a single feature manually after or while developing it might seem enough, but from time to time a new feature will break an old one.

You may have tested something very thoroughly, and then, after everything seems OK, you change one little innocent line of code or some configuration value that surely won't affect anything. This, too, introduces bugs from time to time, especially when you don't expect it.

Documenting Code and APIs

I am not a big fan of code comments. I usually delete them whenever I find any and replace them with properly named variables, methods or classes. This is called clean code.
However, there is definitely a need to document your code in a way that is neither redundant nor goes stale.
In my opinion, the solution here is testing. Write self-explanatory tests, name them properly and use the Given-When-Then form. The best way to document a class or an API is actual code that uses that API. So why not put some effort into writing readable, executable tests that serve as a specification of the code? You can even generate REST API documentation from your tests, including real requests, responses and parameters.
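For illustration, a minimal sketch of such a test in JUnit 5 with AssertJ, using Given-When-Then both in the name and the body; the Account class is hypothetical and inlined here:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;

class AccountTest {

    // hypothetical domain class, inlined to keep the sketch runnable
    static class Account {
        private int balance;
        void deposit(int amount) { balance += amount; }
        int balance() { return balance; }
    }

    @Test
    void givenEmptyAccount_whenDepositing100_thenBalanceIs100() {
        // given
        Account account = new Account();

        // when
        account.deposit(100);

        // then
        assertThat(account.balance()).isEqualTo(100);
    }
}

Read aloud, the test name is a one-line specification of the behavior, and this specification cannot go stale because it is executed on every build.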

Reducing stress

To me, this is the most important and, in my opinion, most underrated benefit of an automated test suite. It will cover you when things get stressful, when a release is close and managers scream at you. I have been in such situations many times. Very early in my career, I did not have a comprehensive test suite and I needed to quickly fix something that could have affected timing and caused concurrency issues. I remember testing it very quickly, then shipping it to the customer with drops of sweat on my forehead. I tried very hard to think about all the consequences it might have in my code base and what could have broken.
And obviously, it failed - in production.

A couple of years later, I was in a similar situation. However, I had developed the whole software with TDD, resulting in a huge test suite. From the day I put the software into production, there were no bugs. After adding an urgent new feature, all tests were green and I was very relaxed when shipping it. Nothing broke.

Customer satisfaction & Maintenance Costs

Most customers and - unfortunately - most managers who are not familiar with software and software projects in particular would probably not want to spend money on testing if you asked them.
However, if you ask them "Should this piece of software be of high quality?", most of them will probably say "yes". If you ask them whether they want to pay for maintenance, for bug fixing in the warranty phase of a project, or for reduced features in production, they will say "no".
So, after a while, you learn a couple of things:

  1. Managers and customers expect high-quality, bug-free software, no matter what they tell you beforehand.
  2. To achieve that, you need to write tests.
  3. You should never ask for permission to write tests. Just do it. It's just background noise to managers and customers.
  4. If someone asks you why a feature is not ready yet, never tell them "I'm currently writing tests, it took longer than I thought". Tell them the feature took longer than you thought, because test and production code belong together.

To be very clear: I'm not telling you to trick anyone. I strongly believe that testing will save money over the course of a product's production and maintenance cycle. It will reduce the number of bugs in production, and it will keep the customer happy - a customer who will probably order again and may serve as a reference. It will keep the manager happy because of low maintenance costs.

Software Design

One big advantage, to me, is the resulting software design - I am talking about what classes, methods and interfaces look like. This benefit mostly comes from test-driven development, because it forces you to implement your code to comply with the tests you've written before. If you don't do TDD, you will very often find that it is virtually impossible to test a certain class, method or code branch.

Downsides or Exceptions

Honestly, I don't see many. In fact, the one thing is that after refactoring code, a lot of test cases may no longer compile. This is annoying and maybe time-consuming, but mostly trivial to fix. I would never trade the benefits of testing for not having to fix tests after a refactoring.

What about exceptions to the rule? Do I test side projects? Mostly yes, but not as thoroughly as I do at work. What about pilots or proofs of concept? Actually, I don't believe in such a thing. I've never seen a customer that really understands the consequences of "This is a PoC. No tests, we will throw away all the code once we get the real order." They still expect bug-free software, with fewer features at best. So, no matter what software you build, if it's for professional use (customers or in-house production), you need testing.

And remember, what customers expect.

Conclusion

Writing tests must be mandatory. Untested production code will fail, and according to Murphy's law, it will fail in a very stressful situation where you don't have the time to think thoroughly.

Testing will benefit your software design, reduce stress, improve quality and customer satisfaction.

Start doing it. No excuses.

Originally posted on my blog return.co.de

Cover picture is stolen from this timeless tweet:

Thanks to @kylegalbraith for pointing me to Martin Fowler's article!

Top comments (31)

Frank Carr

The interesting thing about the Titanic is that the chief engineer, Thomas Andrews, repeatedly had his safety designs and testing plans overruled by White Star Line's sales and marketing. Andrews went down with the ship while Bruce Ismay, the managing director, got away on a lifeboat.

Things haven't really changed that much in 104 years when it comes to sales and upper management overruling the best laid plans of engineering teams.

Jan Wedel

Thomas Andrews repeatedly had his safety designs and testing plans overruled by White Star Line's sales and marketing

Sounds familiar :) Would it have helped to prevent the ship from sinking?

Frank Carr

Maybe, and they certainly would have mitigated the extent of the disaster. Some of the things that were overruled included having 46 lifeboats instead of just 20, a double hull that might have prevented the hull from buckling during the collision to the degree it did, and watertight bulkheads that would have gone up to B deck, thus preventing the section-by-section overflow flooding that happened.

It could also be said that the disaster was a classic instance of a "missing password" or "person hit by a bus" scenario. The forward lookouts didn't have binoculars to use because the case was locked and the guy who had the key was no longer on board, having been reassigned to a different ship at the last minute. One good test to have is to ask: what would happen if this key person were no longer available, or if this component/service were no longer available?

Andrew Buntine

Good post. It was a nice read. :)

Honestly, I don't see many. In fact, the one thing is that after refactoring code, a lot of test cases may no longer compile. This is annoying and maybe time-consuming, but mostly trivial to fix.

I feel like this point needs to be given at least some more attention. I've seen people get sold on TDD as a silver bullet, and inevitably the dopamine rush from the red-green-refactor cycle renders them blind to the maintenance nightmare they are creating: excessive test suites that are super tightly coupled to their codebase, not to mention the many levels of indirection introduced in the name of testability.

To be clear, we all agree that teams absolutely should be writing tests and I am not at all claiming you are guilty of anything here. In fact, I am glad to see you've mentioned several times that 100% coverage should not be a goal!

But software development is all about balance and everything has trade-offs. We need to move past the notion that if one does not find TDD perfectly awesome then they are simply doing it wrong.

Jan Wedel

You are right. I did not mean to say that whoever is not doing TDD is doing it wrong. My main point here is that testing is important.
If you are an experienced developer, you might be perfectly capable of writing high-quality, testable and clean code and writing tests afterwards, so that's fine. However, I learned the hard way that when you're inexperienced, writing tests afterwards and not trying to find the simplest solution, you end up with badly designed, overly complex, untestable code that will eventually break. I use TDD as a tool for myself to write good code, and that's why we practice it in coding dojos every 3 weeks in my team.

jhofm

I get your point. Especially in environments with automated builds that are coupled to tests, it can be strenuous to keep unit tests with lala-land mocks aligned with the actual prod context. I like doing integration/system tests with minimal mocks first for that reason; they are ridiculously expensive to create, but in creating them you will learn basically every aspect of all involved systems just to make the tests work in the first place. And that is a good thing! External system behaviour is much easier to understand if you have a test that deals with actual instances of that system and a debugger at hand, instead of a production system and some logs.

And that knowledge pays off: you can reuse tons of code in administrative tasks, and interaction-breaking changes in those systems become immediately apparent, usually far beyond the required scope. Not hating on unit tests; test-driven code is better code, because it somewhat encourages the dev to think a bit more top-down about his own code. That said, every environment has its own quirks, and having a basic set of system tests in place is a costly but valuable thing, and always a good way to start if you try to clean up things.

Hila Berger

Hi Jan,
I really agree with what you're saying here.
In my experience, even the simplest method might break because of an annoying little bug that you didn't think about...
When you perform unit testing, do you use mocks or do you try to avoid them?

Jan Wedel

Hi Hila,

Yes, I'm definitely using mocks when needed. However, I'm using mocks only for very simple use cases, specifically defining return values. Sometimes it is tempting to use a lot of verifications to check whether your implementation actually does some calls. This is bad and very fragile, because it reimplements the behavior of the class you test and will break as soon as you refactor something. It's important to design methods in a functional way, so you put something in and expect something out.
I even use mocking for integration tests. When I test REST APIs, I mostly do actual requests against some in-memory DB, but I might mock other internal services, for example to throw exceptions.
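To illustrate the "stub return values, assert on results, no verifications" style described above, here is a minimal Mockito sketch; UserRepository and GreetingService are hypothetical and inlined:

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class GreetingServiceTest {

    // hypothetical collaborator to be mocked
    interface UserRepository {
        String findNameById(long id);
    }

    // hypothetical class under test
    static class GreetingService {
        private final UserRepository repository;

        GreetingService(UserRepository repository) {
            this.repository = repository;
        }

        String greet(long id) {
            return "Hello, " + repository.findNameById(id) + "!";
        }
    }

    @Test
    void greetsUserByName() {
        // the mock only supplies a return value; there is no verify(...) anywhere
        UserRepository repository = mock(UserRepository.class);
        when(repository.findNameById(42L)).thenReturn("Hila");

        assertThat(new GreetingService(repository).greet(42L)).isEqualTo("Hello, Hila!");
    }
}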

Hila Berger

Hi Jan,
I didn't quite understand why using verifications to check if your implementation does some calls is a bad thing. Can you elaborate on it?
Also, which mocking framework do you use?
Thanks a lot!

Jan Wedel

I didn't quite understand why using verifications to check if your implementation does some calls is a bad thing. Can you elaborate on it?

So first, it's not inherently bad. It depends on how and how often you use it.

Let me give you an example:

void methodA(int someInt, int someOtherInt) {
    objA.foo(someInt);
    objB.bar(someOtherInt);
}

When you write tests that verify calling both methods, especially in a specific order, you duplicate the implementation in your tests. Tests should focus on checking results based on some input. That's what I mean by "functional style". So I would always try not to have methods like the one above that only have side effects. Instead, I would write methods like:

int methodA(int someInt, int someOtherInt) {
    int foo = objA.foo(someInt);
    int bar = objB.bar(someOtherInt);

    return foo + bar;
}

Now you can simply write a test, for example:

assertThat(instance.methodA(1, 2)).isEqualTo(3);

When you figure out that you can simplify your method by writing...

int methodA(int someInt, int someOtherInt) {
    return someInt + someOtherInt;
}

the test is still green, because you tested the result, not the internal behaviour.

That's why TDD is so important. It forces you to design all of your classes, methods and functions in a way that they can easily be tested by simply putting some values in and expecting something out.

There are obviously exceptions to this "rule", but there should be a very good reason.

Side effects should only happen at the very boundary of the application, e.g. REST on one side and a DB on the other. All code in the middle can be written in a functional style without any side effects.

Also, which mocking framework do you use?

For Java: Mockito or Spring Test. See my other article for some examples of how to do integration tests: dev.to/stealthmusic/modern-java-de...

Hila Berger

Thanks a lot!
Have you heard about Typemock by any chance?

Jan Wedel

Have you heard about Typemock by any chance?

Nope, not yet. I just looked it up, it's .NET and C/C++, right?

Hila Berger

Yes. My team and I are working with their product for .NET and we are satisfied. I wanted to hear from other people who use their product... I guess I'll just keep looking :)

Jan van Brügge

In my experience, you don't need unit tests if you have a somewhat decent compiler. You can then spend the time you saved on writing a lot of annoying unit tests to write more complicated integration tests. I usually make those property-based, so I have the framework generate tests for me based on a mathematical property that should hold for any input. The rest of the saved time goes into writing more end-to-end tests that really test your whole service in its entirety.

Jan Wedel

I did not expect that someone actually does CDD ;)
Honestly, integration tests are great but much too expensive to cover all code paths. I actually love unit tests: they are very fast and give you immediate feedback in your development cycle. Hundreds of tests run in a matter of seconds. You will never achieve that with integration tests.

Jan van Brügge

The problem with unit tests is that even 100% test coverage does not mean you have caught most bugs. A decent type system cuts down the surface for errors by a lot.
I tend to disagree about cost. An integration test is just a unit test that tests multiple parts together. For me this means testing a whole component at once. Property-based testing helps a lot, because you can have far more tests than you would write by hand. For example, for a form you would test whether inputting random data into the form gets validated and saved correctly, and then you generate 100 tests that check that for you. In CI you can even run more to be sure.
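A minimal sketch of what that looks like in Java with the jqwik library (any property-based framework works the same way); the properties here are deliberately trivial:

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class TrimProperties {

    // holds for any generated string: trimming is idempotent
    @Property
    boolean trimmingTwiceEqualsTrimmingOnce(@ForAll String input) {
        return input.trim().trim().equals(input.trim());
    }

    // holds for any generated string: a trimmed string never grows
    @Property
    boolean trimmedStringNeverGrows(@ForAll String input) {
        return input.trim().length() <= input.length();
    }
}

The framework runs each property against hundreds of generated inputs, so you get the "100 tests" mentioned above from a few lines of code.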

The really expensive tests are end-to-end tests, where you spin up a database, fill it with data and run the tests by scripting interactions with your UI.

Jan Wedel

There are two different types of bugs: unexpected behavior and wrong behavior (in a functional sense).
The former may result in things like NPEs in languages like Java. That is definitely something that could be prevented in more strongly typed languages like Haskell, for example. However, the second type can still happen, and it can happen in each and every line of code. Sure, you can write integration tests, but it's much harder to cover all the edge cases than with unit tests. Using CI is obviously very important, but it mostly solves the problem of "works on my machine". It's definitely way too slow to use while implementing a feature - there, you want feedback loops of 500 ms to 1 s.

Nicolas Bousquet

A few remarks on my side:

  • In the industry, typically 10% of the issues are found in production by the client, even if you actually work as a professional and do everything you described above. While testing is necessary, it in no way makes the software 100% bullet-proof, far from it. I agree with you that you have to test anyway; you don't want 100% of the issues discovered by the client in production either ;)
  • A quite significant portion of the issues comes from bad design rather than bad implementation. Unit tests are 100% useless there, and integration tests help only partially, because both are written upon wrong assumptions. Acceptance tests are better but typically not enough, because in many cases people don't really understand what they need, or what is really needed, until they face the final product and try to use it for real. You want the client to really use and play with the application even when it isn't finished, you want to be sure you really understood the needs, you want to work with experts in the field who know their job, and so on. These issues are the most costly.
  • I do not agree with the direct costs of your various tests. Notably, I find that integration tests can often also serve as acceptance and non-regression tests, are fast to write, and test a wide variety of features at quite low cost. On top of that, they are of a higher level and less prone to change after a refactoring of the internal design.

This isn't to say one shouldn't do all forms of testing; you should, and unit tests can be quite productive when writing the code (the integration tests are not always available yet). But I wanted to add some thoughts to the discussion.

Jan Wedel

Thanks for your great comments.

You want the client to really use/play with the application even when it isn't finished, you want to be sure you really understood the needs, work with expert in the field that know their job and so on.

This is actually both a true and an interesting thought. I think working together with a customer and showing him/her the results iteratively could (and maybe should) be viewed as part of testing the software.
From the view of a customer, it's not easy to understand the difference between "That's a bug" and "That's not what we wanted", and maybe there shouldn't even be a difference. I think I might add a section to my post :)

Noticably, I find that integration test can often also serve as acceptance and non regression tests and are fast to write and do test a wide variety of features with quite low cost.

In fact, we don't write real acceptance tests at all and use a lot of integration tests. When you use Spring Boot integration tests, they are quite fast. Despite that, the cost of writing and executing them is still higher than writing unit tests, but luckily not much anymore. I think it depends on what system you build and what language you use. I worked for quite some time in the automotive industry developing large C++ systems. The integration tests took 20 minutes to run.

Nicolas Bousquet

From the view of a customer, it's not easy to understand the difference between "That's a bug" and "That's not what we wanted" and maybe there even shouldn't be a difference. I think I might add a section to my post :)

Depending on the organisation and the contract, there may be a lot of difference. But to me, what really counts for considering the product truly successful is that it is really used in production, actually solves the customers' problems and makes them happy.

It is said in the industry that about half of the projects in computer science are failures. Often it can be seen as a budget issue: there is no more money to pour into it.

Even if this is purely about cost, one can wonder why we started such a project if it was so expensive and the client didn't want to afford it. Often the cost was minimized, and actually fixing the software would need a lot of money, without any proof that the thing is going to be worth it. The software as it is doesn't really solve enough of the customer's problems to be worth the investment.

This is at the core of any project: we need to be clear on the objectives and whether they are achievable, and go toward that. If everything is a success but the software doesn't meet the original objectives, that's a failure. Not of one person, but collectively.

I remember one guy saying that if, after multiplying the estimated cost by 2 and dividing the estimated revenue by 2, you don't see that you'll make a lot of money, the project likely isn't worth it - because everybody is too optimistic.

Despite that, the cost of writing and executing them is still higher than writing unit tests, but luckily not much anymore. I think it depends on what system you build and what language you use.

Yep, I think the cost depends extremely heavily on the technology used: both on how smartly one writes the tests, and on how easy the technology and the overall system make it to test.

Often not enough is invested in improving the tooling, speeding up its runs, training people to use it efficiently and so on.

What can I say? The cost of unit tests varies a lot. But there are two costs: the people working on and maintaining the tests, and the machines running them.

To me, even in the countries with the lowest wages, developers and software engineers are extremely costly. They don't like to do things repeatedly, and their motivation drops significantly if you force them to. Also, the more people are needed to achieve something, the more is spent on communication, training, knowledge sharing and so on, and the less on doing the real stuff. The cost of software is exponential.

So one should invest a lot in the capacity to automate everything, to get things done on clusters of worker machines so nobody has to wait; there is no reason not to do things in parallel if a single machine is too slow. In big projects, people may waste most of their time in validation; investing there can have a huge impact on individual, and thus overall company, productivity.

Getting a cluster of machines to do the validation is cheap compared to people. Being able to have a test system like production should be easy, as in these cloud days everything is automated anyway: you have the scripts to deploy the whole system automatically and to scale it up and down, again automatically.

While you may not want to push something that hasn't been tested, waste hours of CPU/machine time for no reason, go back to it 20 times and have long iterations, you can layer your testing: run the few tests that matter locally and fast, and get the additional confidence from the testing infrastructure.

Avery

Thank you for taking the time to write such a great comment!

Gio

It is a bit extreme to assume tests are the pinnacle of high-quality software, or the sole reason that your project will succeed. Tests are one of many pieces required for a high-quality system and are not more important than any of the other pieces.

I'm (and the company I work at is) a big advocate for type systems and leveraging them to have sane and logical representations of our systems' states. This reduces the need for tests DRAMATICALLY. I.e., rather than writing a unit test to ensure some nonsensical state never pops up, why not leverage the type system to make nonsensical states impossible to represent in the first place?
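Translated to the Java used elsewhere in this post, a rough sketch of the idea using sealed interfaces and records (Java 21) to make a nonsensical state unrepresentable:

// a connection is either disconnected or connected with a session;
// a "connected but no session" value simply cannot be constructed,
// so no unit test for that state is needed
sealed interface Connection permits Disconnected, Connected {}

record Disconnected() implements Connection {}

record Connected(String sessionId) implements Connection {
    Connected {
        if (sessionId == null || sessionId.isBlank()) {
            throw new IllegalArgumentException("connected implies a session id");
        }
    }
}

class ConnectionDescriber {
    static String describe(Connection connection) {
        // the compiler enforces that both (and only these) cases are handled
        return switch (connection) {
            case Disconnected d -> "offline";
            case Connected c -> "online with session " + c.sessionId();
        };
    }
}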

Check out this post discussing type systems to help avoid mistakes.

Also, there's a good talk from the Elm community about making non-sensical states impossible to represent (link) - again reducing the need for tests.

Jan Wedel

Yes, I absolutely agree that type systems help a lot. I would actually not use a dynamically typed language for any critical system. Dynamic typing actually increases the number of tests you need to write, right up to an unbearable amount. I did that once...

Actually, I'm also a big fan of state machines. I am an electrical engineer and I worked for years on embedded projects. I actually replaced a lot of code that frequently had bugs with proper state machines. However, because we use Java, I needed to write at least a couple of tests.
So, there is always some nice language that may improve your code or reduce the number of tests you need to write. I actually like Haskell and Erlang very much, for different reasons. But I would not use them for professional projects, because I would not be able to hire any developers for my team. For my professional life I need to stick with Java, which is at least not dynamically typed, although there is a lot to be improved...

Kyle Galbraith

Nice write-up, Jan. I think the importance of automated testing cannot be stressed enough. You briefly mention it here, but it is worth pointing folks to the testing pyramid. I see a lot of development teams putting a ton of effort into the top of the pyramid. This is expensive and often slow. Focusing on the bottom of the pyramid and building out fast and coherent unit tests can really accelerate development.

Jan Wedel

Hey Kyle,

thanks for your feedback. This is interesting. I actually decided not to look up any existing literature, and I hadn't read that article of Martin's. But I completely agree with the idea of the testing pyramid. Maybe I will add this link and put some more emphasis on it!

Alain Van Hout

I think API tests might also deserve a place on that list. I count those as separate from (typical) integration tests, because API tests can be executed against a regular running instance of your (web) application, while integration testing generally involves setup and bootstrapping and tends to be quite resource-intensive.

Jan Wedel

Yep, as stated, it's not a comprehensive list. Personally, we use integration tests for the APIs as well. In fact, we implement them test-driven too, and we use Spring REST Docs to create API documentation from our tests.
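For the curious, a minimal sketch of how such a documenting test can look with Spring REST Docs and MockMvc; the UserController and the /users endpoint are hypothetical:

import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.restdocs.AutoConfigureRestDocs;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;

// UserController is a hypothetical controller under test
@WebMvcTest(UserController.class)
@AutoConfigureRestDocs
class UserApiDocumentationTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void listUsers() throws Exception {
        mockMvc.perform(get("/users").accept(MediaType.APPLICATION_JSON))
                .andExpect(status().isOk())
                // writes request/response snippets for the documentation
                .andDo(document("list-users"));
    }
}

Running the test produces real request/response snippets that can be included in the API docs, so the documentation is always backed by a passing test.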

Truong Hoang Dung

The actual reason people bother with testing is not the quality of the code or the design, but the ability to refactor the design - I mean, the ability to change an existing system without breaking it.

If you don't need to refactor your code, tests are not needed.

Jan Wedel

Hi!

That might be your reason, but it's definitely not everyone's reason, I can assure you.
The improved design certainly also helps maintainability, but if you didn't do testing and/or test-driven design, you would first see your code in action in production. And you'd find all the errors there, which is very embarrassing for any professional software developer - at least that's my point of view.

Scofield Idehen

awesome