I do not claim to hold the ultimate truth here.
So, first of all, let's recall what the testing pyramid tells us: the higher we go, the slower the tests run and the more each test costs. As a result, the size of each pyramid layer describes how many tests we should have at that level.
Not all of us remember what integration tests actually mean. Some interpretations of the testing pyramid split the integration level into Component, System, and API. I don't fully agree with that, but only because such a model invites misunderstandings. Integration can happen between components in code, between all components of one part of the system, between parts of the system (e.g. backend and DB), and so on. So when I talk about integration tests, I like to clarify which parts of the system the integration is between.
I want to use micro-services on the backend with a standalone frontend as the example architecture for describing test levels. From my point of view, it is quite a popular architecture nowadays. Its main headache for test automation engineers is the number of services, which can be written in different languages.
Levels of tests:
- Unit testing of backend and frontend - I think we need no additional information here; it is a level well known to all of us
- Integration tests:
- Isolated testing of each service, with mocked requests to other services (e.g. Yandex Testsuite). Here we should test the logic of the current service. If a service needs another one, we could move that check to API testing, but it is also possible to mock all outgoing requests from your service. Corner cases and backend data validation should be tested here.
- Isolated testing of the frontend (e.g. Playwright, Cypress, Puppeteer). Here we should cover all possible UI tests that do not exercise the server's business logic, for example: form validation, error handling, data display, and so on. Test as much as you can at this level; don't rely on E2E tests - they cost much more than these.
- API (integration) testing across the full set of backend services. This tests the integration between all services and the full chain of business logic that we can reach through the API.
- E2E tests. Here we should test as little as possible - most business logic should already be verified at the levels above. This level checks the last bit of integration between all components of the system. I prefer a graph-based (or mindmap, if you like) approach to writing such tests: use user stories to build a graph, then create BDD-like long tests, each covering one path through the graph.
Pros:
- Integration tests at each level are much faster than E2E tests
- Testing the frontend with a mocked backend is much easier, because you do not need to set up the backend for every situation you want on the UI (e.g. HTTP errors and the content shown for them)
- You write fewer high-level tests - you increase your productivity and decrease the cost of the final app
Cons:
- You get a zoo of tools and tests - each type of test might be written in a different way
- It can be hard to mock requests from the backend to other services
- Setting up all the infrastructure and testing frameworks costs more
- Sometimes it's hard to determine at which level a test should be placed (E2E or integration)
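As an illustration of the mocked-backend point above, here is a minimal stub backend built on Python's stdlib `http.server` that always answers with a canned HTTP 500. A browser-driving tool (Playwright, Cypress, etc.) would point the frontend at this address to exercise the UI's error handling; the endpoint path and error body are invented:

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubBackend(BaseHTTPRequestHandler):
    """Always returns HTTP 500, so the UI's error states can be tested."""
    def do_GET(self):
        body = json.dumps({"error": "internal"}).encode()
        self.send_response(500)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):      # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubBackend)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Here we just verify the stub behaves as intended; a UI test would hit
# the same address through the browser instead.
try:
    urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/api/items")
    assert False, "expected HTTP 500"
except urllib.error.HTTPError as e:
    assert e.code == 500

server.shutdown()
```

Swapping the handler's response lets you simulate timeouts, malformed payloads, or any other backend situation that would be painful to reproduce against a real server.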
This architecture makes some things easier, and it is especially useful for big projects with a large number of tests (we had 4K+ automated tests, excluding unit tests). Not many people like writing tests in different languages for different parts of the system, but if it will save you time in the future - why not?