
re: Test Driven Development - not Unit Test Driven Development


I think if you can write an integration test or end-to-end test and still go through the red->green->refactor cycle in minutes (rather than hours), you are getting a lot of the benefit of TDD, but higher level tests tend to cover a much broader scope of functionality, so switching from red->green is probably going to require a much larger amount of implementation, which will hurt the speed of your feedback.

This is (IMO) the principal reason all the original TDD was done at the unit level: it's much easier to (a) focus on one single aspect of behaviour per test and (b) drive the design of the code. This is very much covered by your statement around "less granular feedback of test cases and risk of over-engineering a solution".

If I were at the point where all the fundamental pieces of my solution were in place and I was now wiring them up to build new pages or page flows (for example), I could imagine switching from TDD primarily at the unit level to driving the functionality via higher level tests. By that time, I think the design of the software is probably settling down and the cycle time would be quite short. On the other hand, if I were just starting to build out my first page/page flow with all its associated responsibilities (e.g., validation, error handling, auth'n and auth'z, etc.) then I would most certainly drive this via unit tests.
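To make that cycle concrete, here is a minimal sketch of a single red->green step at the unit level, assuming Jest; the `formatPrice` function and its expected behaviour are hypothetical, purely for illustration:

```typescript
// formatPrice.test.ts -- Red: written first, fails until formatPrice exists.
import { formatPrice } from "./formatPrice";

// One single aspect of behaviour per test, as noted above.
it("formats a number of cents as a dollar string", () => {
  expect(formatPrice(1999)).toBe("$19.99");
});
```

```typescript
// formatPrice.ts -- Green: the smallest implementation that passes the test.
export function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}
```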


> but higher level tests tend to cover a much broader scope of functionality, so switching from red->green is probably going to require a much larger amount of implementation, which will hurt the speed of your feedback.

This is so important. Well put. 🙏


Fair points, and I fully agree with the idea that when starting on something I know will be quite a big piece of software with a lot of concerns, I will opt for unit level tests to help design the separation of concerns and slowly build up the design or blueprint of my solution.

I think I will need to come up with a series of examples and try writing different tests to see what works best in certain scenarios.

My main example for integration/e2e tests is Lambda, because it usually should be small enough that having unit, integration and e2e tests seems like overkill.

And I often see a lot of tests in a codebase where behaviour is... tested too much? That's why I'm starting to question certain beliefs I hold.

Thanks for the feedback; it's something I want to write a little more about, and I need wider input/challenging opinions :)


Testing Lambda functions is an interesting world, because if they truly are only a single function (doing a single thing), then I agree that unit + integration (with the Lambda runtime plus upstream and downstream dependencies) + e2e (presumably with a bigger set of AWS resources) could be overkill. My experience has been that unit testing still provides a lot of value if you can neatly separate the AWS/Lambda integration, so the function can be unit tested without the need to do much mocking of the execution environment. Most of the Lambda functions I've worked with have been less like single functions and more like small services themselves, closer to the original idea of what a _micro_service should be.
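A rough sketch of what I mean by that separation, with hypothetical names (TypeScript, Jest): the handler stays a thin adapter over the execution environment, and the core logic is a plain function that can be unit tested without mocking Lambda at all.

```typescript
// handler.ts -- thin adapter over the Lambda runtime (hypothetical example).
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { processOrder } from "./processOrder";

export async function handler(
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> {
  // Only translate between the AWS event shape and plain inputs/outputs here.
  const order = JSON.parse(event.body ?? "{}");
  const result = processOrder(order);
  return { statusCode: 200, body: JSON.stringify(result) };
}
```

```typescript
// processOrder.ts -- the pure core logic, no AWS types anywhere.
type Order = { items: { price: number }[] };

export function processOrder(order: Order): { total: number } {
  return { total: order.items.reduce((sum, item) => sum + item.price, 0) };
}
```

```typescript
// processOrder.test.ts -- unit tests hit the pure core; no Lambda mocking needed.
import { processOrder } from "./processOrder";

it("totals the line items", () => {
  const result = processOrder({ items: [{ price: 5 }, { price: 7 }] });
  expect(result.total).toBe(12);
});
```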

There is a natural increase in testing complexity as you build complex distributed solutions, and I only use TDD to drive the low level design of the individual components. I won't usually attempt to test-drive a Lambda being triggered as the result of an S3 event, for example. In most cases, I will have built the Lambda beforehand anyway.

You also mention the maintenance legacy of large automated test suites. This is very real and I've seen entire suites be discarded because teams didn't know how to refactor them to provide a more efficient source of truth for the codebase.

I have a hypothetical question:

Let's say you work on a microservice/Lambda. You start with TDD and slowly work your way up to higher level tests and the full application.

As you move up to higher levels, do you keep all the unit tests you have written along the way, or do some become obsolete at some point because of the integration tests?

The example I wrote previously is:

  1. let's say an endpoint needs to fetch data from another service
  2. I write a test and a function that makes the HTTP request and gives the data back
  3. I write a test for the endpoint that modifies the data from the function above
  4. I write the implementation

Depending on how I write the test in step 3, I either mock the function from step 1, or mock the HTTP layer again, the same as in the unit test for the data function (see the sketch below). With the latter, the unit test feels obsolete. Yes, you probably lose more granular feedback, but isn't the test better? Plus, if I change the function/module in step 1 but the data remains the same, I don't have to change the tests anymore.
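A sketch of the two options in step 3, assuming Jest and hypothetical `fetchUserData`/`userEndpoint` modules (made-up names, just to make the choice concrete):

```typescript
// Option A: mock the function from step 1 directly.
import { fetchUserData } from "./fetchUserData"; // hypothetical data function
import { userEndpoint } from "./userEndpoint";   // hypothetical endpoint under test

jest.mock("./fetchUserData");

it("returns the modified data (mocking the data function)", async () => {
  (fetchUserData as jest.Mock).mockResolvedValue({ name: "vlad" });
  const response = await userEndpoint();
  expect(response.body).toEqual({ name: "VLAD" }); // say the endpoint upper-cases it
});
```

```typescript
// Option B: stub the HTTP layer instead, exercising the real fetchUserData.
// This overlaps heavily with the unit test for fetchUserData itself.
import { userEndpoint } from "./userEndpoint";

it("returns the modified data (mocking the HTTP layer)", async () => {
  global.fetch = jest.fn().mockResolvedValue({
    ok: true,
    json: async () => ({ name: "vlad" }),
  }) as unknown as typeof fetch;

  const response = await userEndpoint();
  expect(response.body).toEqual({ name: "VLAD" });
});
```

If the step 1 function is rewritten but still returns the same data, Option B keeps passing untouched, which is exactly the trade-off described above.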

I am still trying to narrow down my exact hypothesis 😀

Hi Vlad,

You are absolutely correct that, if you are writing lower level unit tests and then higher level tests, there will be substantial overlap between the two layers. The same is true when applying automated testing to any codebase, if you start with unit tests.

Are these unit tests obsolete? Maybe.

Are they redundant? Probably.

Could you refactor the unit tests to remove some of the redundancy? Definitely.

Should you delete them? Almost definitely not :-)

As you said, the "granular feedback" of the unit test suite is the main reason I keep these tests around. The secondary reason would be the ability to exert more exact control over error conditions that might be harder to set up in higher level tests.
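A quick hypothetical sketch of that second reason (same made-up `fetchUserData`/`userEndpoint` modules as in the earlier sketch, with Jest): forcing a downstream failure is one line in a unit test, whereas a higher level test would need the real dependency to actually misbehave.

```typescript
import { fetchUserData } from "./fetchUserData"; // hypothetical, as above
import { userEndpoint } from "./userEndpoint";

jest.mock("./fetchUserData");

it("maps an upstream failure to a 502", async () => {
  // Exact control over the error condition: just tell the mock to reject.
  (fetchUserData as jest.Mock).mockRejectedValue(new Error("connection reset"));
  const response = await userEndpoint();
  expect(response.statusCode).toBe(502); // assumes the endpoint maps errors this way
});
```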

In an ideal world, a regression in behaviour in an application should only break a single test. Practically, a single regression might break a couple of unit tests and a corresponding higher level test. This is annoying, but I've not found a good way to get the fast feedback from unit tests (and especially TDD) AND the higher confidence from higher level tests AND the lack of redundancy when they are combined in a single codebase.

I've seen quite a few teams stuck with many thousands of unit tests that were an absolute drag on their ability to work on the codebase and with no way to redesign the test suite to allow them to spend more time building code and less time repairing the suite. Test suite design and maintenance is certainly not a solved problem within the industry.

Thanks for your thorough replies, Andy - appreciate your time and wisdom :)

I think the last paragraph captures what bothers me the most: when the tests start taking up more time than they should (I wonder if there is a measure/metric? Is everything 80/20? 😂)

I will think about it a little longer and may come back to this conversation in the future 🙂
