Guidelines to build good RESTful APIs - Part 3: Testing Phase

This post is the third part of a four-article series about our Guidelines to build good RESTful APIs.

You can access the other articles from the following links:

🧐 "Does it even work?"

Testing is an important step in our development lifecycle, and we use different types of tests in every project. Each type has its own strengths and costs, so we try to keep a balanced mix of all of them.

You may have seen this pyramid about testing:

Testing Pyramid

Most of the time we follow this pattern, with a lot of automated unit tests. As we move up to integration and E2E tests, the number of tests decreases because they become more and more expensive to write and run.

The priority of the tests depends on your project. For example, when we inherit an existing API with no tests at all, we prefer to focus on E2E and integration tests first, as they are better at detecting regressions in the product behavior.

βš™οΈ Unit Testing

Reference: Wikipedia article about Unit Testing

In our team, unit tests are mandatory when building new web APIs. They are written and maintained by the developers.

They are the closest to the code and are tightly coupled with the language or framework you are using. As such, they also need to be treated as code:

  • Aim for good quality. Avoid silly logic, name your variables properly, and run linters on your unit test code!

  • Learn how to properly use the testing frameworks at your disposal. Testing frameworks are usually powerful but can be very complex at the same time. Spend the appropriate amount of time to feel comfortable with them.

  • Make good use of support helpers like mocking/stubbing libraries, assertion libraries, and efficient runners. They can dramatically help you write simpler, more readable tests.

  • They should be reviewed!

One note though: we noticed that trying to get rid of all copy-paste and make absolutely everything reusable, as you would in application code, can lead to hard-to-read tests. That's why we often have specific coding rules (and linter rules) for unit tests that are a bit more relaxed. It's up to you and your team to find the balance that works well.
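
To make this concrete, here is a minimal sketch of such a unit test in Python with pytest and `unittest.mock`; the `OrderService` and its payment gateway are hypothetical names, not from our actual codebase:

```python
from unittest.mock import Mock

import pytest


class OrderService:
    """Hypothetical service under test."""

    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, order_id, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(order_id, amount)


def test_checkout_charges_the_gateway():
    # Stub the external dependency so the test stays fast and isolated.
    gateway = Mock()
    gateway.charge.return_value = "receipt-123"

    service = OrderService(gateway)

    assert service.checkout("order-1", 42.0) == "receipt-123"
    gateway.charge.assert_called_once_with("order-1", 42.0)


def test_checkout_rejects_non_positive_amounts():
    service = OrderService(Mock())
    with pytest.raises(ValueError):
        service.checkout("order-1", 0)
```

Note how the mock keeps the test independent of any real payment system, and how the assertion helpers make the intent readable at a glance.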

| Pros | Cons |
| --- | --- |
| Find some problems early in the development phase (especially bugs and implementation flaws) | Cannot detect broader errors (created by the interactions of several components together) |
| Fast to execute | Can give a false sense of security if you rely only on them and on metrics like coverage (100% coverage =/= bug-free) |
| Help a lot to be more confident during refactorings | Need to find a good balance between reusability and readability in the tests codebase |
| Can act as some type of documentation | |
| Are automated by nature | |

🧩 Integration and End-to-End Testing

Those tests fall mostly under the responsibility of our dedicated QA team. They test bigger chunks of the codebase/product at once and have higher value when we test the overall product.

The QA team usually comes up with test scenarios once the product specs are done. The developers then review those scenarios and give the QA engineers feedback from a different point of view. This also helps the developers better understand the test suites and anticipate issues when they change parts of the implementation.
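
As an illustration, an automated E2E scenario can be as simple as driving the deployed API over HTTP. This sketch uses Python with `requests` against a hypothetical staging environment and `/users` endpoint:

```python
import requests

# Hypothetical base URL of a deployed test environment.
BASE_URL = "https://api.staging.example.com"


def test_create_then_fetch_user():
    # Scenario derived from the product specs: a created user
    # must be retrievable through the public API.
    created = requests.post(
        f"{BASE_URL}/users", json={"name": "Alice"}, timeout=10
    )
    assert created.status_code == 201
    user_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/users/{user_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "Alice"
```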

| Pros | Cons |
| --- | --- |
| Cover test scenarios closer to real use-cases and product specs | Take more time to implement and automate |
| Can be compared directly to specs | Slower to execute |
| High-value to detect product regressions | Don't cover everything and focus only on the main flows |
| Can also act as some type of documentation | |

🏎 Performance Testing

The APIs our team builds are mainly consumed by mobile applications and may have to serve up to several tens of millions of requests per day. Some of those APIs also have to deliver content that is essential for the applications to work properly: API performance needs to be tested.

Before the initial release of an API, we benchmark our different endpoints (with the help of specialized tools like Locust, Gatling or k6) in order to:

  • Measure the maximum capacity of our systems: how many requests/clients we can support while keeping an "acceptable" response time.

  • Detect which components are the bottlenecks. Is the database slowing everything down? Are the connection pools properly configured? Is the Load Balancer big enough? These bottlenecks may require immediate fixing or mitigation.

  • Study the scalability of the different components. How much more traffic can we sustain if we double the memory of a component? How do the infrastructure costs increase as the traffic scales?

Those numbers are precious for evaluating the cost of the system at several scales, guaranteeing the response time for a specific number of clients, and so on.
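
As a starting point, a Locust scenario can stay very small. This sketch simulates mobile clients hitting two hypothetical endpoints:

```python
from locust import HttpUser, task, between


class MobileAppUser(HttpUser):
    # Simulated think time between two requests of the same client.
    wait_time = between(1, 3)

    @task(3)  # weight: the feed is fetched more often than the profile
    def get_home_feed(self):
        self.client.get("/v1/feed")

    @task(1)
    def get_profile(self):
        self.client.get("/v1/me")
```

Running it with `locust -f loadtest.py --host https://api.staging.example.com` and progressively increasing the number of simulated users gives you the capacity and bottleneck numbers described above.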

Performance testing is not straightforward and requires a strict test protocol. It can also be time-consuming. Not all APIs need it, but ask yourself these questions:

"If our product becomes popular, will our systems scale? How easy and fast will it be? How big is the growth/financial/reputation impact if the system cannot sustain the traffic?".

If you work on such sensitive APIs, prepare and run Performance Testing before the initial release. Ideally, you also want to monitor performance continuously and make sure it doesn't degrade over time.

It's a big topic on its own but here are some resources that could help you:

💪 Reliability Testing

Reliability Testing aims to confirm how your system reacts when one or several components face different levels of degradation. As with Performance Testing, not every API needs it. But if your API is an important piece of your business's continuity of service, you need to think about reliability testing.

During the design phase, the architect or the developers may introduce some redundancy like:

  • Having several instances of the web API so that if one becomes unhealthy, the service is not totally down.

  • Having multiple database nodes, like a Primary-Secondary failover setup or a triple-node cluster.

  • Deploying the service on multiple datacenters/regions/cloud providers to cope with geo-localized issues (regional disaster, power outage, network issue, etc.).

That's very good! But in reality, things are rarely so simple:

  • Maybe you didn't configure the services properly to take advantage of that redundancy (some databases require the code to handle some aspects of the failover).

  • Maybe your supposedly "optional" cache layer breaks your entire application if it becomes temporarily unavailable (a defensive pattern is sketched below).
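
A minimal sketch of such a defensive cache access, assuming a Redis client and a hypothetical `load_user_from_db` function (values are stored as plain strings for brevity):

```python
import logging

import redis

logger = logging.getLogger(__name__)
cache = redis.Redis(host="localhost", port=6379)


def get_user(user_id, load_user_from_db):
    # Treat the cache as best-effort: any cache failure falls back
    # to the primary datastore instead of failing the whole request.
    key = f"user:{user_id}"
    try:
        cached = cache.get(key)
        if cached is not None:
            return cached
    except redis.RedisError:
        logger.warning("Cache unavailable, falling back to the database")

    user = load_user_from_db(user_id)
    try:
        cache.set(key, user, ex=300)  # 5-minute TTL
    except redis.RedisError:
        pass  # losing a cache write is acceptable here
    return user
```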

What seems nice on paper needs to be tested in real conditions. You have to make sure all the mechanisms you put in place work as expected in case of emergency. It's similar to this famous quote about backups:

There’s No Point in Backing Up Your Data if You Don’t Test Your Backup.

You want to know if your API continues to work after the failure of some components.

How many requests will be dropped during the failover? Is it acceptable?

The testing is very specific to each project, but try to figure out the scenarios you want to confirm, depending on the quality of service you need to guarantee.
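
For example, a crude but effective way to measure dropped requests during a failover drill is to poll the API while you take a node down; the endpoint below is hypothetical:

```python
import time

import requests

URL = "https://api.staging.example.com/health"  # hypothetical endpoint

ok = failed = 0
start = time.time()
while time.time() - start < 120:  # observe a 2-minute failover window
    try:
        response = requests.get(URL, timeout=2)
        if response.status_code == 200:
            ok += 1
        else:
            failed += 1
    except requests.RequestException:
        failed += 1
    time.sleep(0.1)

print(f"succeeded={ok} dropped={failed}")
```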

πŸ” Security testing

At Rakuten we are lucky to have teams dedicated to Cyber Security. They help us test the software we build. For web APIs, this often involves Penetration Testing using:

  • Source code scanning to detect vulnerable software versions.

  • Port scanning to detect vulnerabilities on the network stack.

  • SQL injection and XSS attacks.

They can also help during the design phase to make sure you don't introduce fundamental business flaws that would require a major revamp later.

So problem solved? Not exactly.

Having dedicated security engineers is nice, but software security should involve EVERYONE developing the product: the people writing the product specs, the developers, the QA engineers, and also the DevOps engineers in charge of setting up the infrastructure.

The involvement of the different roles is important to apply the concept called Defense in Depth, where multiple layers of security are put in place to minimize the chances of a successful attack.

Defense in Depth

For every new API our team builds:

  • We go through a Security Audit.

  • And we make sure that all issues are fixed before release.

Ideally, you would also have automated security diagnostics running against your APIs as part of the CI/CD pipeline, so you can detect more issues by yourself as early as possible.
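
As a sketch of what such automated diagnostics could look like, here are two pytest checks that could run in the pipeline against a hypothetical staging environment:

```python
import requests

BASE_URL = "https://api.staging.example.com"  # hypothetical environment


def test_protected_endpoint_requires_authentication():
    # Regression check: this endpoint must never become publicly readable.
    response = requests.get(f"{BASE_URL}/v1/me", timeout=10)
    assert response.status_code in (401, 403)


def test_security_headers_are_present():
    response = requests.get(f"{BASE_URL}/v1/feed", timeout=10)
    assert "Strict-Transport-Security" in response.headers
    assert response.headers.get("X-Content-Type-Options") == "nosniff"
```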

⏩ Next part

After testing, you now need to deploy your API and make sure it runs properly at all times. Let's talk about operations in the final article.

Part 4: Ops, Ops, Ops! and Final Word
