Some things you may not know about me: I worked alongside network engineers and cybersecurity professionals for a while before deciding that Software Engineering was the technical discipline that I was drawn to.
When I went back to school to "make it official" (even though I've been a software engineer for years) I picked Software Engineering and Security as my major, because I hadn't quite dropped that interest in security.
It never came up in real life until I joined my current team, where my actual title is Senior Product Security Engineer. It was super cool to finally get to apply things from my education to my day-to-day work.
At this point, I have only a capstone course left in my degree, so I know about as much as a baccalaureate software security program has to offer. That by no means makes me an expert. EVEN SO, I know enough to have seen some atrociously bad practices from a security standpoint, and I think most of them boil down to a lack of awareness. So in this post, I will cover some basic terminology for background, and then give you a handful of astonishingly simple things you can do to keep every project secure.
How do we even go about determining that software is vulnerable to something? Fortunately for us, it would be ridiculous to start from zero, and we don't have to. There are several institutions that track software defects that can be exploited by attackers.
MITRE is what I consider to be the most comprehensive, although maybe that is my own bias. MITRE has been around forever and they are a definitive source on software defects with implications for security. They don't track all software defects - if your website looks crappy on mobile, that isn't going to make it up to MITRE. Their business is software defects that can be exploited from a security perspective.
While we are at it, let's define some basic MITRE terms and software defect terms.
- Weaknesses: A weakness is the most basic form of software defect with security implications. MITRE tracks weaknesses with the Common Weakness Enumeration (CWE), and specific weaknesses are given scores based on severity. The scoring rubric used to do this is called the Common Weakness Scoring System (CWSS). Weaknesses are given an identification number so that people can track their status, like CWE-326 as one example.
- Vulnerabilities: Vulnerabilities are where the rubber hits the road and someone manages to exploit a weakness for some purpose. These are tracked through the CVE program, and specific entries are called "CVEs", short for Common Vulnerabilities and Exposures. The impact of a CVE is determined with a scoring rubric called the Common Vulnerability Scoring System (CVSS).
- Platforms: MITRE also used to maintain the Common Platform Enumeration (CPE), although the National Institute of Standards and Technology (NIST) now heads up that effort. CPE isn't a second vulnerability list; it's a structured naming scheme for products and platforms. A way of looking at CPE vs CVE might be: if a specific ruby gem had a vulnerability, the vulnerability itself would be tracked as a CVE, and that CVE record would use CPE names to identify exactly which products and versions (say, a particular release of Red Hat Enterprise Linux) are affected.
Holy acronyms, Batman! But yeah, those are the building blocks of the software security world. A quick word about scoring systems: if you are curious how software vulnerabilities get impact ratings, it basically follows the line of thought you'd have as a detective solving a crime:
- Means: how accessible are the tools needed to pull off this attack? Do you have to be on a specific server or network, or is the vulnerability in a popular package shipped across millions of websites?
- Motive: how attractive is the outcome of the attack to an attacker? Would they mildly disrupt a website, or be able to steal millions of dollars or a treasure trove of personal information they can resell?
- Opportunity: The longer a vulnerability is around, the more defenses can be mounted against it. We often discover a vulnerability only after someone mounts the first attack, and an attack gets harder to carry out as awareness of it grows. Opportunity can also mean things like having partial privileged access, like a disgruntled ex-employee who may still have credentials to some system.
What I'm about to describe for you should be mostly within reach for software engineers of pretty much any level, although as a junior engineer I do remember it took me a minute to set up my first pre-commit hook.
Sometimes the simplest things have the biggest impact. Here is my checklist template for security for any new project:
- Start with the latest and greatest versions of everything: If I'm going to build a new Vue/NodeJS app, I'm going to use the latest version of everything. Version bumps often contain security fixes to known issues.
- Tie static code analysis to a pre-commit hook or CI process: Static code analysis programs analyze your non-running code for common security issues. My preference is to handle this in pre-commit hooks, but do whatever works for your team.
- Automate package audits: Again, it doesn't really matter how you set this up, but whatever ecosystem you are in, the package manager should have an audit feature to reveal known vulnerabilities (NPM's version of this is `npm audit`).
- Automate checks for unused packages: There is a concept in software security called attack surface: the number of places where attacks can occur. The more code and dependencies you have, the bigger your attack surface. Checks for unused packages should happen at regular intervals to free you of dependencies you aren't actually using.
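The first three checklist items can be wired together in a single git pre-commit hook. This is just a sketch for a Node project; the specific tool choices here (ESLint for static analysis, `depcheck` for finding unused packages) are my assumptions, so substitute whatever your ecosystem provides.

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit for a Node project.
# Swap these commands for whatever tools your team actually uses.

# Static analysis (assumes ESLint is configured for this repo)
npx eslint . || exit 1

# Block the commit if the dependency audit finds high-severity issues
npm audit --audit-level=high || exit 1

# depcheck exits non-zero when it finds unused dependencies
npx depcheck || exit 1
```

Drop this in `.git/hooks/pre-commit` and make it executable (`chmod +x .git/hooks/pre-commit`), or manage it with a framework like the `pre-commit` tool so the whole team shares the same hooks.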
I'll also share with you that even if you aren't a "security person", you likely have a role to play in security:
- Frontend engineers: You have a huge role in keeping software secure. SQL injection, or inserting malicious commands through form data, is a very old attack but it has not gone anywhere. Input validation on forms is a useful first line of defense against it, though client-side checks can be bypassed, so the backend must never trust them alone. The frontend can also be a vulnerable area for brute force attacks: it is trivial to create a bot that sits on a form page and cycles through the most common usernames and passwords until it hits a winner. So in addition to input validation, rate limiting has implications for the frontend domain (should a login be disabled if too many failed attempts are made? etc.) and is critically important. With the popularization of client-side routing, our frontend folks are also sometimes responsible for implementing access control and restricted routing logic.
- Backend engineers: You also have an important role in keeping software safe. Modern object-relational mapping (ORM) systems are generally good at stopping SQL injection, although not everyone uses them. I've seen a no-ORM API where logic for dynamic input was concatenated straight into query strings, basically a SQL injection dream. I've seen backends built in Python 2 in 2020, the year it reached end of life. This is an inherently risky practice, as the language maintainers said they would not update Python 2 even if critical vulnerabilities were found. API rate limiting is also important in preventing Denial of Service attacks, where someone overloads and crashes your system. Any backend that operates on the filesystem or runs dynamic commands against a server is an area of concern and needs to be handled very delicately.
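To make the SQL injection point above concrete, here is a sketch of the difference between concatenating input into a query and parameterizing it. The query objects are illustrative only; in real code a driver (pg, mysql2, an ORM) does the actual binding.

```javascript
// DANGEROUS: user input becomes part of the SQL text itself,
// so crafted input can change the meaning of the statement.
function unsafeQuery(username) {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// SAFER: the SQL text is fixed; input travels separately as a bound
// value, so it can never change the statement's shape.
function safeQuery(username) {
  return { text: "SELECT * FROM users WHERE name = $1", values: [username] };
}

const evil = "x' OR '1'='1";
console.log(unsafeQuery(evil));
// → SELECT * FROM users WHERE name = 'x' OR '1'='1'   (matches every row!)
console.log(safeQuery(evil).text); // statement unchanged, input stays data
```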
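The rate-limiting idea that shows up in both roles above can be sketched as a tiny in-memory failure counter. Every name here (`LoginLimiter`, `maxAttempts`) is my own invention, and a real deployment would keep this state in shared storage like Redis rather than process memory.

```javascript
// Sketch: lock out a key (username, IP, ...) after too many failed
// login attempts inside a rolling time window.
class LoginLimiter {
  constructor(maxAttempts = 5, windowMs = 15 * 60 * 1000) {
    this.maxAttempts = maxAttempts;
    this.windowMs = windowMs;
    this.attempts = new Map(); // key -> array of failure timestamps
  }

  // Returns false once the key has hit the allowed failures in-window.
  isAllowed(key, now = Date.now()) {
    const recent = (this.attempts.get(key) || [])
      .filter((t) => now - t < this.windowMs);
    this.attempts.set(key, recent); // drop stale entries
    return recent.length < this.maxAttempts;
  }

  recordFailure(key, now = Date.now()) {
    const list = this.attempts.get(key) || [];
    list.push(now);
    this.attempts.set(key, list);
  }
}

// Usage: a bot hammering one username gets cut off quickly,
// while other users are unaffected.
const limiter = new LoginLimiter(3, 60_000);
for (let i = 0; i < 3; i++) limiter.recordFailure("admin");
console.log(limiter.isAllowed("admin")); // false
console.log(limiter.isAllowed("alice")); // true
```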
Even if you aren't a security person, it is never going to hurt you to have basic knowledge of software security measures. I hope this post was able to provide a high level view of some of the considerations to keep in mind when building software. If you have other things you prioritize for your team's security I'd love to hear them in the comments.