
Everything That's Not Tested Will Break

Jan Wedel on May 05, 2018

Full disclosure: I'm a big fan of testing, especially TDD. And, mostly, I am talking about professional software development (you actually get paid...
Frank Carr

The interesting thing about the Titanic is that the chief engineer, Thomas Andrews, repeatedly had his safety designs and testing plans overruled by White Star Lines sales and marketing. Andrews went down with the ship while Bruce Ismay, the managing director, got away on a life boat.

Things haven't really changed that much in 104 years when it comes to sales and upper management overruling the best laid plans of engineering teams.

Jan Wedel

Thomas Andrews, repeatedly had his safety designs and testing plans overruled by White Star Lines sales and marketing

Sounds familiar :) Would it have helped to prevent the ship from sinking?

Frank Carr

Maybe and they certainly would have mitigated the extent of the disaster. Some of the things that were overruled included having 46 lifeboats instead of just 20, a double hull that might have prevented the hull from buckling during the collision to the degree it did and watertight bulkheads that would have gone up to B deck, thus preventing the section-by-section overflow flooding that happened.

It also could be said that the disaster was a classic instance of a "missing password" or "person hit by bus" scenario. The forward lookouts didn't have binoculars to use because the case was locked, and the guy who had the key was no longer on board the ship, having been reassigned to a different ship at the last minute. One good test to have is to ask what would happen if a key person, or a component/service, were no longer available.

Andrew Buntine

Good post. It was a nice read. :)

Honestly, I don't see many. In fact the one thing is, that after refactoring code, a lot of test cases may not compile. This is annoying and maybe time-consuming but mostly trivial to fix.

I feel like this point needs to be given at least some more attention. I've seen people get sold into TDD as a silver bullet and inevitably the resulting dopamine rush received from the red-green refactor cycle renders them blind to the maintenance nightmare they are creating with excessive test suites that are super tightly coupled to their codebase, not to mention the many levels of indirection that are introduced in the name of testability.

To be clear, we all agree that teams absolutely should be writing tests and I am not at all claiming you are guilty of anything here. In fact, I am glad to see you've mentioned several times that 100% coverage should not be a goal!

But software development is all about balance and everything has trade-offs. We need to move past the notion that if one does not find TDD perfectly awesome then they are simply doing it wrong.

Jan Wedel

You are right. I did not try to say that whoever is not doing TDD is doing it wrong. My main point here was that testing is important.
If you are an experienced developer, you might be perfectly capable of writing high-quality, testable and clean code and writing tests afterwards, so that's fine. However, I learned the hard way that when you're inexperienced, writing tests afterwards and not trying to find the simplest solution, you end up with badly designed, overly complex, untestable code that will eventually break. I use TDD as a tool for myself to write good code, and that's why we practice it in coding dojos every 3 weeks in my team.

jhofm

I get your point. Especially in environments with automated builds that are coupled to tests, it can be strenuous to keep unit tests with lala-land mocks aligned with the actual prod context.

I like doing integration/system tests with minimal mocks first for that reason; they are ridiculously expensive to create, but in creating them you will learn basically every aspect of all involved systems just to make the tests work in the first place. And that is a good thing! External system behaviour is much easier to understand if you have a test that deals with actual instances of that system and a debugger at hand, instead of a production system and some logs. And that knowledge pays off; you can reuse tons of code in administrative tasks, and interaction-breaking changes in those systems become immediately apparent, usually far beyond the required scope.

Not hating on unit tests; test-driven code is better code because it somewhat encourages the dev to think a bit more top-down about their own code. That said, every environment has its own quirks, and having a basic set of system tests in place is a costly but valuable thing and always a good way to start if you try to clean things up.

Hila Berger

Hi Jan,
I really agree with what you're saying here.
In my experience, even the simplest method might break because of an annoying little bug that you didn't think about...
When you perform unit testing, do you use mocks or try to avoid them?

Jan Wedel

Hi Hila,

Yes, I’m definitely using mocks when needed. However, I’m using mocks only for very simple use cases, specifically defining return values. Sometimes it is tempting to use a lot of verifications to check whether your implementation actually does some calls. This is bad and very fragile, because it reimplements the behavior of the class you test and will break as soon as you refactor something. It’s important to design methods in a functional way, so you put something in and expect something out.
I even use mocking for integration tests. When I test REST APIs, I mostly do the actual requests against some in-memory DB, but may mock other internal services to throw exceptions, for example.
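
To make that concrete, here is a minimal sketch of the stub-only style I mean, using Mockito and AssertJ (the PriceRepository/PriceService names are invented for this example, not from a real project):

import org.junit.jupiter.api.Test;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class PriceServiceTest {

    interface PriceRepository {              // hypothetical collaborator
        int basePrice(String productId);
    }

    static class PriceService {              // hypothetical class under test
        private final PriceRepository repository;

        PriceService(PriceRepository repository) {
            this.repository = repository;
        }

        int priceWithTax(String productId) {
            return repository.basePrice(productId) * 119 / 100;
        }
    }

    @Test
    void calculatesPriceWithTax() {
        PriceRepository repository = mock(PriceRepository.class);
        when(repository.basePrice("book-1")).thenReturn(100);   // stub a return value only

        PriceService service = new PriceService(repository);

        // assert on the result; no verify(...) of internal calls
        assertThat(service.priceWithTax("book-1")).isEqualTo(119);
    }
}

The mock only supplies data; the assertion is about the output, so the test survives internal refactorings.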

Hila Berger

Hi Jan,
I didn't quite understand why using verifications to check if your implementation does some calls is a bad thing, can you elaborate on it?
Also, which mocking framework do you use?
Thanks a lot!

Jan Wedel

I didn't quite understand why using verifications to check if your implementation does some calls is a bad thing, can you elaborate on it?

So first, it's not inherently bad. It depends on how and how often you use it.

Let me give you an example:

void methodA(int someInt, int someOtherInt) {
    objA.foo(someInt);
    objB.bar(someOtherInt);
}

When you write tests that verify calling both methods, especially in a specific order, you duplicate the implementation in your tests. Tests should focus on checking results based on some input. That's what I mean by "functional style". So I would always try not to have any methods like the one above that only have side effects. Rather, I would write methods like:

int methodA(int someInt, int someOtherInt) {
    int foo = objA.foo(someInt);
    int bar = objB.bar(someOtherInt);

    return foo + bar;
}

Now you can simply write a test, for example:

assertThat(instance.methodA(1, 2)).isEqualTo(3);

When you figure out that you can simplify your method by writing...

int methodA(int someInt, int someOtherInt) {
    return someInt + someOtherInt;
}

the test is still green, because you tested the result, not the internal behaviour.

That's why TDD is so important. It forces you to design all of your classes, methods, functions in a way that they can easily be tested by simply putting some values in and expecting something out.

There are obviously exceptions to this "rule", but there should be a very good reason.

Side effects should only happen at the very boundary of the application, e.g. REST on one side and a DB on the other. All code in the middle can be written in a functional style without any side effects.
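
As a rough sketch of that layering (the invoice names are invented just for illustration), only the boundary object touches the DB, while the calculation in the middle stays a pure function:

// Pure core: trivially unit-testable, no mocks required.
final class Pricing {
    static int totalWithTax(int netTotal, int taxPercent) {
        return netTotal * (100 + taxPercent) / 100;
    }
}

// Hypothetical boundary: the only place with side effects (DB access).
interface InvoiceRepository {
    int netTotalOf(long invoiceId);
}

class InvoiceEndpoint {
    private final InvoiceRepository repository;

    InvoiceEndpoint(InvoiceRepository repository) {
        this.repository = repository;
    }

    int invoiceTotal(long invoiceId) {
        int netTotal = repository.netTotalOf(invoiceId);  // side effect at the boundary
        return Pricing.totalWithTax(netTotal, 19);        // pure logic in the middle
    }
}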

Also, which mocking framework do you use?

For Java: Mockito or Spring Test. See my other article for some examples of how to do integration tests: dev.to/stealthmusic/modern-java-de...

Hila Berger

Thanks a lot!
Have you heard about Typemock by any chance?

Jan Wedel

Have you heard about Typemock by any chance?

Nope, not yet. I just looked it up, it's .NET and C/C++, right?

Hila Berger

Yes. My team and I are working with their product for .NET and we are satisfied. I wanted to hear from other people who use their product...I guess I'll just keep looking :)

Jan van Brügge

In my experience, you don't need unit tests if you have a somewhat decent compiler. You can then spend the time you saved writing a lot of annoying unit tests to write more complicated integration tests. I usually make those property-based, so I have the framework generate tests for me based on a mathematical property that should hold for any input. And the rest of the saved time goes into writing more end-to-end tests, which really test your whole service in full.

Jan Wedel

I did not expect that someone actually does CDD ;)
Honestly, integration tests are great but much too expensive to test all code paths. I actually love unit tests; they are very fast and give you immediate feedback in your development cycle. Hundreds of tests run in a matter of seconds. You will never achieve that with integration tests.

Jan van Brügge

The problem with unit tests is that even 100% test coverage does not mean you have caught most bugs. A decent type system cuts down the surface for errors by a lot.
I tend to disagree about cost. An integration test is just a unit test that tests multiple parts together. For me this means testing a whole component at once. Property-based testing helps a lot, because you can have a lot more tests than you would write otherwise. For example, for a form you would test whether inputting random data into the form gets validated and saved correctly. Then you generate 100 tests that will check that for you. In CI you can even run more to be sure.
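
As a sketch, a property-based test in Java could look roughly like this with jqwik (the normalizeName function is made up for the example):

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

import static org.assertj.core.api.Assertions.assertThat;

class NamePropertiesTest {

    // Hypothetical code under test: trims and collapses whitespace in a form field.
    static String normalizeName(String raw) {
        return raw.trim().replaceAll("\\s+", " ");
    }

    // The framework generates many random inputs; the property must hold for all of them.
    @Property
    void normalizingTwiceChangesNothing(@ForAll String raw) {
        String once = normalizeName(raw);
        assertThat(normalizeName(once)).isEqualTo(once);
    }

    @Property
    void normalizedNamesContainNoDoubleSpaces(@ForAll String raw) {
        assertThat(normalizeName(raw)).doesNotContain("  ");
    }
}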

The really expensive tests are end-to-end tests, where you spin up a database, fill it with data and just run the tests by scripting interactions with your UI.

Jan Wedel

There are two different types of bugs: unexpected behavior and wrong behavior (in a functional sense).
The former may result in things like NPEs in languages like Java. That is definitely something that could be prevented in more strongly typed languages like Haskell, for example. However, the second type could still happen, and it could happen in each and every line of code. Sure, you can write integration tests, but it's much harder to cover all the edge cases than with unit tests. Using CI is obviously very important, but it mostly solves the problem of "works on my machine". It's definitely way too slow to use while implementing a feature; there you want feedback loops of 500 ms to 1 s.

Nicolas Bousquet

A few remarks on my side:

  • In the industry, typically 10% of the issues are found in production by the client, even if you actually work as a professional and do all of the above that you described. While testing is necessary, it in no way makes the software 100% bulletproof, far from it. I agree with you that you have to test anyway; you don't want 100% of the issues discovered by the client in production either ;)
  • A quite significant portion of the issues comes from bad design rather than bad implementation. Unit tests are 100% useless there, and integration tests help only partially, because both are written upon wrong assumptions. Acceptance tests are better but typically not enough, because in many cases people don't really understand what they need, or what is really needed, until they face the final product and try to use it for real. You want the client to really use/play with the application even when it isn't finished, you want to be sure you really understood the needs, work with experts in the field who know their job, and so on. These issues are the most costly.
  • I do not agree with the direct costs of your various tests. Notably, I find that integration tests can often also serve as acceptance and non-regression tests, are fast to write, and test a wide variety of features at quite low cost. On top of that, they are of a higher level and less prone to changes after a refactoring of the internal design.

This isn't to say one shouldn't do all forms of testing; you should, and unit tests can be quite productive while writing the code (the integration tests are not always available yet). But I wanted to add some thoughts to the discussion.

Jan Wedel

Thanks for your great comments.

You want the client to really use/play with the application even when it isn't finished, you want to be sure you really understood the needs, work with experts in the field who know their job, and so on.

This is actually both a true and an interesting thought. I think working together with a customer and showing him/her the results iteratively could (and maybe should) be viewed as part of testing the software.
From the view of a customer, it's not easy to understand the difference between "That's a bug" and "That's not what we wanted", and maybe there even shouldn't be a difference. I think I might add a section to my post :)

Notably, I find that integration tests can often also serve as acceptance and non-regression tests, are fast to write, and test a wide variety of features at quite low cost.

In fact, we don't write real acceptance tests at all and use a lot of integration tests. When you use Spring Boot integration tests, they are quite fast. Despite that, the cost of writing and executing them is still higher than writing unit tests, but luckily not by much anymore. I think it depends on what system you build and what language you use. I worked for quite some time in the automotive industry developing large C++ systems. The integration tests took 20 minutes to run.
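
For context, a minimal Spring Boot integration test of the kind I mean could look like this (the /status endpoint is a placeholder, not from our real code):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;

import static org.assertj.core.api.Assertions.assertThat;

// Boots the whole application context on a random port and tests it over HTTP.
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class StatusEndpointIT {

    @Autowired
    private TestRestTemplate restTemplate;

    @Test
    void statusEndpointRespondsWithOk() {
        ResponseEntity<String> response = restTemplate.getForEntity("/status", String.class);
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);
    }
}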

Nicolas Bousquet

From the view of a customer, it's not easy to understand the difference between "That's a bug" and "That's not what we wanted" and maybe there even shouldn't be a difference. I think I might add a section to my post :)

Depending on the organisation and contract, there may be a lot of difference. But to me, what really counts for considering the product truly successful is that it is really used in production, that it actually solves the customers' problems and makes them happy.

It is said in the industry that about half of the projects in computer science are failures. Often it can be seen as a budget issue: there is no more money to pour into it.

Even if this is purely about cost, one can wonder why we started such a project if it was so expensive and the client didn't want to afford it. Often the cost was minimized, and to actually fix the software a lot of money is needed without any proof that the thing is going to be worth it. The software as it is doesn't really solve enough of the customer's problems to be worth the investment.

This is at the core of any project: we need to be clear on the objectives, on whether they are achievable, and go toward that. If everything is a success but the software doesn't meet the original objectives, that's a failure. Not of one person, but collectively.

I remember one guy saying that if, after multiplying the estimated cost by 2 and dividing the estimated revenue by 2, you don't see that you'll make a lot of money, the project likely isn't worth it. Because everybody is too optimistic.

Despite that, the cost of writing and executing them is still higher than writing unit tests, but luckily not by much anymore. I think it depends on what system you build and what language you use.

Yep, I think the cost depends extremely heavily on the technology used: both on how smartly one writes their tests, and on how easy the technology and the system overall make it to test.

Often not enough is invested in improving the tooling, speeding up its runs, training people to use it efficiently, and so on.

What can I say? The cost of unit tests varies a lot. But there are two costs: the people writing and maintaining them, and the machines running the tests.

To me, even in the countries with the lowest wages, developers and software engineers are extremely costly. They don't like to do things repeatedly, and their motivation lowers significantly if you force them. Also, the more people are needed to achieve something, the more is spent on communication, training, knowledge sharing and so on, and the less on doing the real stuff. The cost of software is exponential.

So one should invest a lot in the capacity to automate everything, to get things done on a cluster of worker machines so nobody has to wait, and there is no reason not to do things in parallel if a single machine is too slow. In big projects, people may waste most of their time in validation; investing there can have a huge impact on individual, and thus overall company, productivity.

Getting a cluster of machines doing the validation is cheap compared to people. Being able to have a test system like production should be easy, as in the cloud era everything is automated anyway and you have the scripts to deploy the whole system and scale it up and down automatically.

While you may not want to push something that hasn't been tested, waste hours of CPU/machine time for no reason, go back to it 20 times and have long iterations, you can layer your testing: run the few tests that matter locally and fast, and get the additional confidence from the testing infrastructure.

Avery

Thank you for taking the time to write such a great comment!

Gio

It is a bit extreme to assume tests are the pinnacle of high-quality software, or the sole reason that your project will succeed. Tests are one of many pieces required to have a high-quality system and are not more important than any of the other pieces.

I (and the company I work at) am a big advocate for type systems and for leveraging them to have sane and logical representations of our systems' states. This reduces the need for tests DRAMATICALLY. I.e., rather than writing a unit test to ensure some nonsensical state never pops up, why not leverage the type system to make nonsensical states impossible to represent in the first place?

Check out this post discussing type systems to help avoid mistakes.

Also, there's a good talk from the Elm community about making nonsensical states impossible to represent (link) - again reducing the need for tests.
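
A rough Java sketch of the same idea (all type names invented for illustration; it needs a recent Java version with sealed types and switch patterns): instead of one class with nullable data and error fields, a sealed hierarchy makes states like "loaded but also has an error" unrepresentable.

// A request result can only ever be exactly one of these three things.
sealed interface FetchState permits Loading, Loaded, Failed {}

record Loading() implements FetchState {}
record Loaded(String payload) implements FetchState {}
record Failed(String errorMessage) implements FetchState {}

class Renderer {
    // The compiler forces every state to be handled; no test is needed for
    // "payload is null while errorMessage is also null" style combinations.
    String render(FetchState state) {
        return switch (state) {
            case Loading l -> "Loading...";
            case Loaded l  -> "Data: " + l.payload();
            case Failed f  -> "Error: " + f.errorMessage();
        };
    }
}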

Jan Wedel

Yes, I absolutely agree that type systems help a lot. I would actually not use a dynamically typed language for any critical system. Using one actually increases the number of tests you need, often to an unbearable amount. I did that once...

Actually, I’m also a big fan of state machines. I am an electrical engineer and I worked for years on embedded projects. I actually replaced a lot of code that had frequent bugs with proper state machines. However, because we use Java, I needed to write at least a couple of tests.
So, there is always some nice language that may improve your code or reduce the number of tests you need to write. I actually like Haskell and Erlang very much, for different reasons. But I would not use them for professional projects, because I would not be able to hire any developers for my team. For my professional life I need to stick with Java, which is at least not dynamically typed, although there is a lot to be improved...
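
As a sketch of what I mean by a proper state machine in Java (a made-up connection example, not from the actual embedded code):

// States and allowed transitions are explicit, so an illegal transition
// fails fast instead of leaving the object in a half-valid condition.
enum ConnectionState { DISCONNECTED, CONNECTING, CONNECTED }

class Connection {
    private ConnectionState state = ConnectionState.DISCONNECTED;

    void connect() {
        requireState(ConnectionState.DISCONNECTED);
        state = ConnectionState.CONNECTING;
    }

    void onConnectionEstablished() {
        requireState(ConnectionState.CONNECTING);
        state = ConnectionState.CONNECTED;
    }

    void disconnect() {
        // Allowed from any state; always ends up DISCONNECTED.
        state = ConnectionState.DISCONNECTED;
    }

    ConnectionState state() {
        return state;
    }

    private void requireState(ConnectionState expected) {
        if (state != expected) {
            throw new IllegalStateException("Expected " + expected + " but was " + state);
        }
    }
}

The couple of tests then simply drive the transitions and assert on the resulting state.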

Kyle Galbraith

Nice write-up, Jan. I think the importance of automated testing cannot be stressed enough. You briefly mention it here, but it is worth pointing folks to the testing pyramid. I see a lot of development teams putting a ton of effort into the top of the pyramid. This is expensive and oftentimes slow. Focusing on the bottom of the pyramid and building out fast and coherent unit tests can really accelerate development.

Jan Wedel

Hey Kyle,

thanks for your feedback. This is interesting. I actually decided not to look up any existing literature, and I haven't read this article of Martin's. But I completely agree with the idea of the testing pyramid. Maybe I will add this link and put some more emphasis on it!

Alain Van Hout

I think API tests might also deserve a place on that list. I count those as separate from (typical) integration tests because API tests can (i.e. it's possible to) be executed against a regular running version of your (web) application, while integration testing generally involves setup and bootstrapping, and tends to be quite resource-intensive.

Jan Wedel

Yep, as stated, it’s not a comprehensive list. Personally, we use integration tests for the APIs as well. In fact, we implement them test-driven too, and we use Spring REST Docs to create API documentation from our tests.
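
Roughly, such a test looks like this (a sketch; the /api/users endpoint and snippet name are placeholders, not from our actual API):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.restdocs.AutoConfigureRestDocs;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

// Drives the API test-first and produces documentation snippets as a side product.
@SpringBootTest
@AutoConfigureMockMvc
@AutoConfigureRestDocs
class UsersApiDocumentationTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void listUsersIsDocumented() throws Exception {
        mockMvc.perform(get("/api/users"))
               .andExpect(status().isOk())
               .andDo(document("users-list"));  // writes request/response snippets for the docs
    }
}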

Truong Hoang Dung

The actual reason people bother with testing is not the quality of the code or design, but the ability to refactor the design. I mean the ability to change an existing system without breaking it.

If you don't need to refactor your code, tests are not needed.

Jan Wedel

Hi!

That might be your reason, but it’s definitely not all people’s reason, I can assure you.
The improved design certainly also helps maintainability, but if you didn't do testing and/or test-driven design, you would first see your code in action in production. And you’d find all the errors there, which is very embarrassing for any professional software developer, at least that’s my point of view.

Scofield Idehen

awesome

Ben Halpern

Cover picture is stolen from this timeless

Steal away!