Based on my experience, software testing causes a lot of confusion for many people, sometimes even for programmers and project managers. That's why I decided to write a post giving a rough overview of the different ways of software testing.
When I started my career I was working for a small start-up. There, software testing meant that someone started the software, did some random clicks and checked whether anything strange happened. Later on I moved to the automotive business, where testing is really strict. But what I learnt is this: the software itself determines the right testing process. It depends on how safety-critical your software is, who will use it, how complex it is and so on.
There are several ways of testing: methods can be automated or manual, functional or non-functional, black box, grey box or white box, and they can be applied on several different levels (unit testing, component testing etc.). For each project you need to find the most fitting way of testing. Of course, the more methods you use, the lower the chance that a bug stays in the software (although this chance will never reach zero). On the other hand, testing takes time, so it costs money as well. That's why you need to find the approach that fits your project best. Now I will try to introduce you to the different ways of testing.
A test can be either functional or non-functional. A functional test usually runs against the functional requirements: you are basically checking whether the behaviour of your software matches the requested behaviour. You always need to take care of corner cases as well.
Non-functional testing goes against the non-functional software requirements. It can target the stability of the software, or check whether the software runs in each requested environment (operating system, web browser, different hardware etc.). Security testing belongs to this category as well: it checks whether your software is secure enough, which is especially important for web-based applications. Load testing also belongs here; it tests how your software reacts to a large number of requests. You can also test the runtime of your software in the most critical scenarios (profiling) or its memory consumption. Another example is usability testing, where you check together with the target users whether your software is user-friendly enough.
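To make the load-test idea a bit more concrete, here is a minimal sketch in Python. The `handle_request` function and the 2-second threshold are made-up stand-ins for a real endpoint and a real performance requirement; a real load test would use a dedicated tool and hit the deployed system, not an in-process function:

```python
import time

def handle_request(payload):
    """Hypothetical request handler standing in for the real application code."""
    return {"status": "ok", "echo": payload}

def test_load():
    """Tiny load-test sketch: fire many requests and check the total runtime."""
    n_requests = 10_000
    start = time.perf_counter()
    for i in range(n_requests):
        response = handle_request({"id": i})
        assert response["status"] == "ok"
    elapsed = time.perf_counter() - start
    # The threshold is an assumed requirement ("10k requests under 2 seconds").
    assert elapsed < 2.0, f"too slow: {elapsed:.2f}s for {n_requests} requests"

test_load()
```

The same pattern, measure under pressure and compare against a requirement, applies to memory consumption and profiling as well.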
Automated tests are the so-called "coded" tests. They are implemented once and can be run anytime later. The classical example is unit testing your code with a test framework (GTest, JUnit etc.). Other automated tests can be implemented with testing frameworks (like Selenium) or with scripting languages (Python is also popular in this field). The common point is that you provide some inputs to your program, or to a piece of it, let it run, and finally compare the results with the so-called expected results. Automated tests should be deterministic: with the same piece of code they should always pass or always fail. The big advantage of automated testing is that you can run the tests after each change of your code base and check that your changes did not break already working functionality. For this reason you can integrate them into the continuous integration system, which runs your tests once after each new commit.
Automated tests can also be implemented in parallel with your code, so you don't need to postpone test development until the software is already done.
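The input → run → compare-with-expected-result cycle can be sketched in a few lines of Python (a framework like GTest or JUnit follows the same pattern in C++ or Java). `discount_price` is an invented example function, not from any real project:

```python
def discount_price(price, percent):
    """Hypothetical production code under test."""
    return round(price * (100 - percent) / 100, 2)

def test_regular_discount():
    # Known input, deterministic expected result: 25% off 200.0 is 150.0.
    assert discount_price(200.0, 25) == 150.0

def test_no_discount():
    # Corner case: a 0% discount must leave the price unchanged.
    assert discount_price(99.99, 0) == 99.99

# Run the tests (a framework like pytest would discover these automatically).
test_regular_discount()
test_no_discount()
print("all tests passed")
```

Because the inputs and expected results are fixed, the tests always pass or always fail for the same code, which is exactly the determinism a CI system relies on.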
Manual tests are tests which are done by hand: you start the program manually, perform some predefined steps and check its behaviour. Such tests are good for functionality that you don't want to test frequently, or that is difficult to test in an automated way. What is important here: manual tests also need to be based on a test plan which describes the different test scenarios with exact test steps and expected behaviour.
In case of a black box test you have no knowledge about the code at all. You act just like a simple user: you try out the different functionality of the software without considering its implementation details. For a black box test it is good to have a tester who really has no knowledge about the implementation details of the software (so someone who didn't take part in its development). Black box tests are usually done against the software requirements, which therefore need to be well documented so that good test cases can be defined. In case of a black box test you also need to think like a user. A black box test can be either automated or manual.
In case of a white box test you have access to the whole code base and you consider it when defining your test cases as well. For example, if you know the exact parameters of your function and you know that in case of a negative parameter value your function has to throw an exception, then you should define such a test case. The most typical white box tests are unit tests, but you can do white box tests on other levels as well. They can be both automated and manual. It also counts as a white box test if you, as the developer of the code, try out with a debugger how your code behaves for specific inputs. One important metric in white box testing is code coverage: the percentage of the lines of your code that actually run when you execute all your tests.
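The negative-parameter example above can be written down directly. This is a sketch with an invented function `allocate_buffer`; the point is that the test case comes from reading the implementation, not from the user-facing documentation:

```python
def allocate_buffer(size):
    """Hypothetical function under test. White-box knowledge: reading the
    code shows a branch that rejects negative sizes with a ValueError."""
    if size < 0:
        raise ValueError("size must be non-negative")
    return bytearray(size)

def test_negative_size_raises():
    # White-box test case: we know the negative branch exists,
    # so we cover it explicitly to keep code coverage high.
    try:
        allocate_buffer(-1)
    except ValueError:
        return  # expected behaviour
    raise AssertionError("expected ValueError for negative size")

test_negative_size_raises()
```

A coverage tool would report whether both branches of the `if` were executed by your test suite.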
In case of a grey box test you have limited knowledge about the code. For example you know what the components are, which interfaces are used and what the parameters of those interfaces are, but you have no idea how the components are implemented.
There are different levels of testing, from testing a small piece of code to testing the whole system. Each level has its own advantages and disadvantages. Normally the advantage of lower level tests is that they run faster and point out the root cause of a bug more precisely. On the other hand, there are bugs which can be detected only by higher level tests, because they are caused by the wrong cooperation of different software components. When a higher level test fails, however, it is difficult to tell where the root cause of the bug is, and such tests normally run longer.
Unit testing is the lowest level of testing: it tests one unit. A unit should be small; in most cases it is a function or a class. In unit testing you really test just this small piece of code. To avoid being influenced by other units, those other units can be mocked. That means you replace them with pieces of code that have some fake/dummy behaviour. For this purpose there are several mock frameworks, like Mockito or GMock. To be able to mock you need a clear and modular code architecture. One common methodology that makes mocking easy is called dependency injection.
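Here is a minimal sketch of mocking plus dependency injection, using Python's built-in `unittest.mock` (the post mentions Mockito and GMock; the idea is the same). `ReportService` and `count_users` are invented names for illustration:

```python
from unittest.mock import Mock

class ReportService:
    """The unit under test. The database client is injected through the
    constructor (dependency injection), which makes it easy to mock."""
    def __init__(self, db_client):
        self.db_client = db_client

    def user_count_report(self):
        count = self.db_client.count_users()
        return f"active users: {count}"

# In the test we inject a mock instead of a real database client,
# so the unit is tested in isolation from the other units.
fake_db = Mock()
fake_db.count_users.return_value = 42

service = ReportService(fake_db)
assert service.user_count_report() == "active users: 42"
fake_db.count_users.assert_called_once()
```

Because the dependency arrives through the constructor rather than being created inside the class, the test never touches a real database, which keeps the unit test small, fast and independent.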
A good unit test is small, fast, independent and tests one small piece of the code. Usually unit tests are automated, but they can also be manual.
One more important methodology is test driven development, where you basically implement your unit test cases in parallel with your code.
Unit testing is usually done by the developer of the code. In the long term it is useful for finding regressions.
A good unit test shows the exact place of the bug in case of a failure, so that no long debugging is needed to fix your code.
Component testing is about testing one component of your code. If your code is modular enough and organised into components with clear interfaces, you can test these interfaces separately as well. This is a good candidate for grey box testing: you know which interfaces are to be used (for example in case of a library) and you know what you expect from these interfaces, but you don't know their exact implementation.
Integration tests check how several components work together. An integration test can also detect bugs which are caused by the incorrect cooperation or connection of the components.
System tests run against your whole system: your software running on its target hardware in its target environment. These tests are usually done as final tests before the publication of the software, and they should be done for each software release. System testing is usually black box testing; it can be either manual or automated, and it is usually done by an independent test team.
Regarding the question of who should do the tests, there are several different opinions. In classical testing approaches the rule is usually that the tester and the developer should be two different persons. So if the developer was thinking in a wrong way or had a wrong understanding of the functionality, the tester can still find the bugs. I think this is a good approach for higher level tests and usually for black box tests.
In modern testing approaches like test driven development, the tester and the developer are often the same person. This fits well with the agile approach, where every team member should be able to take over any kind of task (testing, development, design etc.). This approach makes testing a bit faster, but based on my experience it is also less effective.
I think the safest way is a combination of the two: implementation of unit tests by the developer, and higher level black box testing by an independent team. But for every development it needs to be considered whether it is worth spending so much time on testing. For safety-critical systems, like automotive development or the development of airplane software, it of course makes sense. But for a web page or a mobile game maybe it is not so important.
In classical development methodologies like waterfall or the V-Model, tests clearly come after the implementation. But the development of automated tests can still be done in parallel with the implementation. At places where such classical methodologies are followed, the software is also developed in a cyclic way: there are regular software releases and each released version has to be tested. So while the development team is working on release N+1, the test team is testing release N and preparing automated tests for release N+1.
In agile development methodologies like Scrum or Kanban, test driven development is often followed. That means the unit tests and the implementation are done in parallel. What is important is that in agile development you can also do higher level tests. You can handle the implementation of such tests as a separate user story. Or there can be a separate test team, also working based on Scrum for example, which always tests the results of the development team some sprints later.
For both classical and agile approaches I think it is really important to integrate the automated tests into the CI system, so that it can check regularly whether the software works properly. Of course, in this case the tests also need to be updated regularly according to the newly implemented requirements. Since running all tests can take long, it is good to set up your CI separately for the different levels of testing: run all unit tests on every new commit, run all component and integration tests every night, and run the full test suite at each software release.
I realised while writing that this is a really complex topic, so I think I couldn't introduce everything in enough detail, but I hope I could give you a good overview. Since testing is done with different approaches and in different ways at different companies, I'm almost sure that experienced testers would disagree with some of my points. That can happen; to be honest, I'm not a tester either. I hope my article could help you.