Testing Lambda functions is an interesting world because if they truly are only a single function (doing a single thing), then I agree that unit + integration (with the Lambda runtime plus upstream and downstream dependencies) + e2e (presumably with a bigger set of AWS resources) could be overkill. My experience has been that unit testing still provides a lot of value if you can neatly separate the AWS/Lambda integration, so the function can be unit tested without the need to do much mocking of the execution environment. Most of the Lambda functions I've worked with have been less single functions and more like small services themselves, closer to the original idea of what a _micro_service should be.
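To make that separation concrete, here is a minimal sketch of the pattern: a thin handler that only adapts the AWS event, delegating to a pure core function that can be unit tested without mocking the execution environment. All names (`summarise_order`, the event shape) are invented for illustration, not taken from any real project.

```python
import json

def summarise_order(order: dict) -> dict:
    """Pure business logic: no AWS types, trivially unit-testable."""
    total = sum(item["price"] * item["quantity"] for item in order["items"])
    return {"order_id": order["id"], "total": total}

def handler(event, context):
    """Thin adapter: unpack the AWS event, call the core, pack the response."""
    order = json.loads(event["body"])
    result = summarise_order(order)
    return {"statusCode": 200, "body": json.dumps(result)}
```

Unit tests target `summarise_order` directly; only a handful of tests (or the higher-level suite) need to exercise `handler` with a Lambda-shaped event.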
There is a natural increase in testing complexity as you build complex distributed solutions, and I only use TDD to drive the low-level design of the individual components. I won't usually attempt to test-drive a Lambda being triggered as the result of an S3 event, for example. In most cases, I will have built the Lambda beforehand anyway.
You also mention the maintenance legacy of large automated test suites. This is very real and I've seen entire suites be discarded because teams didn't know how to refactor them to provide a more efficient source of truth for the codebase.
I have a hypothetical question:
Let’s say you work on a microservice/Lambda. You start with TDD and slowly work your way up through higher-level tests and the application.
As you move higher, do you end up keeping all the unit tests you have written along the way, or do some become obsolete once the integration tests cover them?
The example I wrote previously is:
Depending on how I write the test in step 3, I either mock the function from step 1, or I mock the HTTP layer again, the same as in the unit test for the data function. With the latter, the unit test feels obsolete. Yes, you are probably losing more granular feedback, but is the test better? Plus, if I change the function/module in step 1 but the data remains the same, I don’t have to change the tests anymore.
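A rough sketch of the two options, since the original steps aren't shown here: `fetch_user` stands in for the step-1 data function and `greet_user` for the step-3 code under test. Both names, the URL, and the response shape are hypothetical.

```python
import json
import urllib.request
from unittest import mock

def fetch_user(user_id: str) -> dict:
    """Step-1 'data' function: wraps the HTTP call."""
    with urllib.request.urlopen(f"https://api.example.com/users/{user_id}") as resp:
        return json.load(resp)

def greet_user(user_id: str) -> str:
    """Step-3 function built on top of fetch_user."""
    user = fetch_user(user_id)
    return f"Hello, {user['name']}!"

# Option A: mock the step-1 function directly.
def test_greet_user_mocking_data_function():
    with mock.patch(f"{__name__}.fetch_user", return_value={"name": "Ada"}):
        assert greet_user("123") == "Hello, Ada!"

# Option B: mock the HTTP layer, the same as the unit test for fetch_user
# would. This overlaps with that unit test, which is why it can feel obsolete.
def test_greet_user_mocking_http_layer():
    fake_resp = mock.MagicMock()
    fake_resp.__enter__.return_value = fake_resp
    fake_resp.read.return_value = b'{"name": "Ada"}'
    with mock.patch("urllib.request.urlopen", return_value=fake_resp):
        assert greet_user("123") == "Hello, Ada!"
```

Option B survives a refactor of `fetch_user` untouched, which matches the point above: as long as the data contract stays the same, the higher-level test doesn't change.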
I am still trying to narrow down my exact hypothesis 😀
You are absolutely correct in that - if you are writing lower level unit tests and then higher level tests - there will be substantial overlap between the two layers. This is the same for applying automated testing to any codebase if you are starting with unit tests.
Are these unit tests obsolete? Maybe.
Are they redundant? Probably.
Could you refactor the unit tests to remove some of the redundancy? Definitely.
Should you delete them? Almost definitely not :-)
As you said, the "granular feedback" of the unit test suite is the main reason I keep these tests around. The secondary reason is the ability to exert more exact control over error conditions that might be harder to set up in higher-level tests.
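For instance, forcing a failure that would be awkward to trigger against real AWS resources is trivial in a unit test. A hedged sketch, with `save_record`, the fake store, and `ThrottlingError` all invented for illustration:

```python
from unittest import mock

class ThrottlingError(Exception):
    """Stands in for a downstream throttling failure."""

def save_record(store, record: dict) -> str:
    """Persist a record, translating throttling into a retryable outcome."""
    try:
        store.put(record)
        return "saved"
    except ThrottlingError:
        return "retry"

def test_throttling_is_reported_as_retryable():
    store = mock.Mock()
    store.put.side_effect = ThrottlingError()  # exact control over the failure
    assert save_record(store, {"id": 1}) == "retry"
```

Provoking genuine throttling in an integration test would be slow and flaky at best; here the error condition is deterministic and instant.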
In an ideal world, a regression in behaviour in an application should only break a single test. Practically, a single regression might break a couple of unit tests and a corresponding higher level test. This is annoying, but I've not found a good way to get the fast feedback from unit tests (and especially TDD) AND the higher confidence from higher level tests AND the lack of redundancy when they are combined in a single codebase.
I've seen quite a few teams stuck with many thousands of unit tests that were an absolute drag on their ability to work on the codebase and with no way to redesign the test suite to allow them to spend more time building code and less time repairing the suite. Test suite design and maintenance is certainly not a solved problem within the industry.
Thanks for your thorough replies Andy - appreciate your time and wisdom :)
I think the last paragraph captures what bothers me the most - when the tests start taking up more time than they should (I wonder if there is a measure/metric for that? Is everything 80/20?)
I will think about it a little longer and may come back to this conversation in the future 🙂