If you are involved in startups or scale-ups long enough, a thousand-page test strategy and a million-line regression e2e test suite won't cut it 😴.
Modern quality engineering thinking suggests that the percentage of testing done at the earliest and lowest level (unit tests) should be significantly higher than that of integration and e2e tests. The industry standard says unit tests should make up around 70% of coverage, while the remaining tests share the other 30% 🤔. IMHO, it depends! Without understanding your company's current landscape, there is no silver bullet or "one-size-fits-all" approach. As quality gurus preach nowadays: there are no best practices, only good practices; good practices in the context of the company's current and future structure, skill set, and tech stacks.
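To make that ratio concrete, here is a minimal sketch of the kind of tests that fill the wide base of the pyramid: unit tests are cheap and fast, so they can afford to cover happy paths, edge cases, and negative cases alike. The `apply_discount` function and its behaviour are hypothetical, purely for illustration.

```python
# Hypothetical pricing function under test.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Unit tests run in milliseconds, so covering many scenarios is cheap --
# this is why they can form the bulk (~70%) of the pyramid's base.
def test_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0

def test_discount_edge_cases():
    assert apply_discount(100.0, 0) == 100.0     # no discount
    assert apply_discount(100.0, 100) == 0.0     # full discount

def test_discount_negative_case():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # invalid input correctly rejected
    else:
        raise AssertionError("expected ValueError for invalid percent")


if __name__ == "__main__":
    test_discount_happy_path()
    test_discount_edge_cases()
    test_discount_negative_case()
    print("all unit tests passed")
```

Note that the negative and edge cases take as little effort as the happy path here; that economy disappears once the same scenarios have to be exercised through a browser in an e2e suite.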
Usually, if you are new to a company and you are given the golden hand to resolve its pain points, you get excited to be the white knight who saves the day (maybe I'm just talking about myself here 😝). There is nothing wrong with that, as it is human nature, but most of the time it's not as straightforward as it seems.
Focusing too much on numbers / quotas tempts a tester to think that reaching the agreed threshold is good enough, even if those tests don't add value. Furthermore, focusing on tooling / frameworks leans toward just following the requirements / happy-path flows, without analysing what other scenarios are possible (e.g. edge or negative cases).
In any case, after reviewing my current company's needs, I came up with my own version of the test pyramid, or what I see as more of an umbrella with sprinkles of rainwater hovering above it (my editing skills just can't find time to draw an umbrella-looking diagram). Regardless of what kind of figure / diagram you make out of it, this is simply our long-term continuous test strategy, which is appropriate for a lean + agile company like ours. It will eventually evolve as the company grows and structures change ♻️.
In my view, the percentage of test coverage at each stage matters. What matters more, however, is the time spent on the two zones I have identified: spending the maximum amount of time in the PROACTIVE ZONE while spending the minimum amount of time in the REACTIVE ZONE.
The ability to plan for delivering a quality feature / product has been lost in Agile's push to deliver fast.
At the same time, as the role of testers slowly evolves into being the caretaker of quality across the whole delivery lifecycle, they (or even your developers who love testing 😏 #sarcasm) should be involved at all levels of this diagram. As you can see, testing is continuous, and the earlier time is spent on quality, the better (I can't emphasise enough how a shift-left mindset sets the culture of delivering quality products! 🕶️).
Just to reiterate, the test strategy should evolve based on the changing needs of the company; it shouldn't be fixed on a simple test pyramid. Furthermore, what worked for your old company won't necessarily work for your new one (my diagram went through multiple iterations too). It's worth mentioning that it depends on your products / service offerings as well. For example, if your company offers API and webhook integrations, then understandably there should be more integration and contract tests involved; if your company offers retail catalogues, then there should be more visual or UI tests; and so on and so forth.
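For the API / webhook case above, here is a rough sketch of what a consumer-side contract check does, written without any real HTTP or contract-testing library (tools like Pact do this properly, with provider verification on the other side). The payload shape and field names are entirely hypothetical.

```python
# Consumer-side contract sketch: the consumer declares the payload shape
# it depends on, and any provider response is checked against it.
# Field names and types here are hypothetical, for illustration only.
EXPECTED_CONTRACT = {
    "order_id": str,
    "status": str,
    "total": float,
}

def validate_contract(payload: dict, contract: dict) -> list:
    """Return a list of contract violations (empty means the payload conforms)."""
    errors = []
    for field, expected_type in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(payload[field]).__name__}")
    return errors


# A conforming webhook payload passes the check.
payload = {"order_id": "A-123", "status": "paid", "total": 49.99}
assert validate_contract(payload, EXPECTED_CONTRACT) == []

# A breaking change on the provider side (total sent as a string)
# is caught by the contract long before an e2e test would notice.
broken = {"order_id": "A-124", "status": "paid", "total": "49.99"}
assert validate_contract(broken, EXPECTED_CONTRACT) != []
```

The value is in where the failure surfaces: a contract test flags the breaking change at the integration boundary, cheaply and early, instead of as a mysterious e2e failure downstream.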
Hopefully in my next article, I will deep dive into those test frameworks.