DEV Community

Discussion on: If you were tasked to conduct a security audit on a server/database-backed web app, where would you start?

Vlatko Vlahek

I would personally start by auditing server and database access before delving into the code of the app and the database queries themselves. Concretely, that means first auditing open ports and the user accounts used to run the services and the database itself. This can give big security gains in a relatively low amount of time.

Some questions to ask yourself here:

  • Is access restricted enough, with only the required ports open to the public? (A quick external check is sketched after this list.)
  • Is this true both for the server itself and for any incoming access control ruleset at the hosting provider (AWS, DigitalOcean, ...)?
  • Do the web server, app, and database run on specific user accounts which are restricted to only what they need in order to run properly (in terms of disk access on the machine, running specific services etc.)?
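
To make the first question concrete, here is a minimal Python sketch of an external port check; the hostname and port lists are placeholders, and a real audit would rather use a dedicated scanner such as nmap:

```python
# Hypothetical sketch: verify that only an allow-list of ports answers publicly.
# "example.com" and the port sets are placeholders for your own host and policy.
import socket

HOST = "example.com"          # public hostname of the server under audit
ALLOWED = {80, 443}           # ports that are supposed to be reachable
CANDIDATES = [21, 22, 80, 443, 3306, 5432, 6379, 8080]  # common service ports

for port in CANDIDATES:
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            reachable = s.connect_ex((HOST, port)) == 0
    except OSError:
        reachable = False
    if reachable and port not in ALLOWED:
        print(f"WARNING: port {port} is reachable but not on the allow-list")
```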

These changes will largely mitigate potential infrastructure breaches. Also, if you need development access to the database (I wouldn't recommend it for production, but sometimes it can't be avoided), never accept an "easy access" setup that adds security vulnerabilities. Enforce a certificate-based VPN plus SSH tunneling to a local port on the developer's machine.
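
For illustration, a rough sketch of that tunneling setup in Python, using the third-party sshtunnel package; the bastion hostname, username, key path, and ports are all made up, and in practice a plain `ssh -L` invocation does the same job:

```python
# Rough sketch using the third-party `sshtunnel` package; hostnames, ports and
# the key path are placeholders.
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
    ("bastion.example.com", 22),              # only the bastion/VPN host is exposed
    ssh_username="deploy",
    ssh_pkey="/home/dev/.ssh/id_ed25519",     # key-based auth, no passwords
    remote_bind_address=("127.0.0.1", 5432),  # the DB only listens on localhost
    local_bind_address=("127.0.0.1", 5432),   # forwarded to the developer machine
) as tunnel:
    # Local clients can now connect to 127.0.0.1:5432 as if the DB were local;
    # the database itself never accepts connections from the internet.
    print("Tunnel up on local port", tunnel.local_bind_port)
```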

Once this is covered, I would focus on determining who has access to the DB, and for what reason.

Part of the problem is GDPR-related, but even if we remove that from the equation, does everybody need to have direct access to the database? If so, why? If it is for running "hacks" or changing something that the app dashboard currently doesn't support, stop that practice and ensure that crucial things can be done from the application side.
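
If the database happens to be PostgreSQL, a small script like the one below can be a starting point for that review; psycopg2 and the pg_roles catalog are assumptions about the stack, and the environment variable name is invented:

```python
# Hypothetical audit helper for PostgreSQL (adjust for your DBMS); connection
# details come from the environment, never from the source code.
import os
import psycopg2

conn = psycopg2.connect(os.environ["AUDIT_DB_DSN"])
with conn, conn.cursor() as cur:
    cur.execute("SELECT rolname, rolsuper, rolcreatedb, rolcanlogin FROM pg_roles")
    for name, superuser, createdb, can_login in cur.fetchall():
        if can_login and (superuser or createdb):
            print(f"Review role {name}: superuser={superuser}, createdb={createdb}")
```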

Also, do a review of the load balancer and web server security rules. Disable unused protocols that don't bring any value. A lot of attack types can be stopped at this level.
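
As one example of such a check, the sketch below uses Python's ssl module to verify that an endpoint refuses an outdated TLS version instead of silently accepting it (the hostname is a placeholder, and the local OpenSSL build must still allow old TLS for the test to be meaningful):

```python
# Negative test: the load balancer should REFUSE a TLS 1.1 handshake.
import socket
import ssl

HOST = "www.example.com"

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
ctx.minimum_version = ssl.TLSVersion.TLSv1_1   # force an outdated protocol
ctx.maximum_version = ssl.TLSVersion.TLSv1_1

try:
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST):
            print("WARNING: server accepted TLS 1.1")
except ssl.SSLError:
    print("OK: outdated protocol rejected")
```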

After the security of access is resolved, I would focus on the codebase itself, like ensuring that there are no user accounts or connection-string passwords in the code; these can be moved to environment variables on the host machines. I have seen this more times than I would personally like, and nobody can guarantee that your code will never leak, whether through your own developers or the git repo. Better safe than sorry.
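
A minimal sketch of what that looks like on the application side, assuming Python and an invented variable name; the key point is that the repository never contains the secret:

```python
# The connection string is read from the host's environment, so it never
# appears in the source code or the git history. Variable name is made up.
import os

def get_connection_string() -> str:
    try:
        return os.environ["APP_DB_CONNECTION_STRING"]
    except KeyError:
        # Fail loudly rather than silently falling back to a default
        # (or, worse, to a hard-coded credential left in the code).
        raise RuntimeError("APP_DB_CONNECTION_STRING is not set on this host")
```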

If you are good on all the mentioned fronts, continue by testing the app against the most common attack types (like SQL injection and XSS), and especially focus on testing API responses, with and without authentication. Try to see what data you can get from the app, and try to get data for other users and resources that you don't own inside the app logic. Also, try to determine how long the authentication tokens really last, and whether this makes sense for the app logic.
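
A hedged example of that kind of API probing with Python's requests library; the base URL, resource, and token are placeholders, and the scenario assumes the requested resource belongs to a different user than the one making the call:

```python
# Compare responses with no auth and with another user's token for a resource
# that the caller does not own. Both should be rejected; a 200 is a finding.
import requests

BASE = "https://app.example.com/api"
RESOURCE = f"{BASE}/invoices/123"   # owned by user A in this scenario

anonymous = requests.get(RESOURCE, timeout=10)
as_user_b = requests.get(
    RESOURCE, headers={"Authorization": "Bearer <token-of-user-B>"}, timeout=10
)

for label, resp in [("anonymous", anonymous), ("other user", as_user_b)]:
    if resp.status_code == 200:
        print(f"FINDING: {label} request returned data ({resp.status_code})")
    else:
        print(f"OK: {label} request rejected with {resp.status_code}")
```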

Bernhard Streit

I'm a bit confused about what you mean by credentials and environment variables. I totally agree credentials should not be part of the code, but isn't it common practice to have them injected into the target container/machine via environment variables, and to prohibit any login to the container/machine?

Vlatko Vlahek

If you are using a CI/CD pipeline, it is definitely preferable to inject something like this from an encrypted environment variable on the CI/CD system and not save anything on the host machine. It has the benefit of added security.

However, a lot of smaller companies don't have a CI/CD pipeline at all. I have seen a lot of admins deploying the app manually via SSH or RDP, by copying files or whatnot. While this is generally not acceptable for serious systems, we can't run from the fact that it happens, especially with teams that are not as experienced in developing larger systems, or simply don't have any infrastructure experience.

I come from a .NET background, where there are a few solutions even for such cases:

  • Secrets manager
  • appsettings.{environment}.json files

The issue with this approach is that these files are not encrypted, so an infrastructure breach will compromise the app. But it's still better than committing credentials to a git repo.
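
For comparison, a rough Python analogue of that layered-configuration idea (file and variable names are invented): non-secret settings live in a per-environment JSON file, while secrets are layered on top from environment variables so they never end up in a file or in git:

```python
# Per-environment settings file plus environment-variable override for secrets.
import json
import os

def load_settings() -> dict:
    env = os.environ.get("APP_ENVIRONMENT", "Development")   # made-up variable
    with open(f"appsettings.{env}.json") as f:
        settings = json.load(f)
    # Secrets override or fill in values from the environment.
    settings["ConnectionString"] = os.environ.get(
        "APP_DB_CONNECTION_STRING", settings.get("ConnectionString")
    )
    return settings
```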