Peter Benjamin (they/them)


Demystifying STRIDE Threat Models


Introduction

Software is eating the world. As a result, the repercussions of software failure are costly and, at times, catastrophic. This can be seen today in a wide variety of incidents, from data leaks caused by misconfigured AWS S3 buckets, to the Facebook data breach caused by lax API limitations, to the Equifax breach caused by an outdated Apache Struts version with a known critical vulnerability.

Application Security advocates encourage developers and engineers to adopt security practices as early in the Software Development Life Cycle (SDLC) as possible. One such security practice is Threat Modeling.

In this article, I offer a high-level introduction to one methodology, called STRIDE, and in a future article, I will demonstrate this process using an existing open-source application as an example.

What is a Threat Model?

Here is the obligatory Wikipedia definition:

Threat modeling is a process by which potential threats, such as structural vulnerabilities, can be identified, enumerated, and prioritized – all from a hypothetical attacker’s point of view. The purpose of threat modeling is to provide defenders with a systematic analysis of the probable attacker’s profile, the most likely attack vectors, and the assets most desired by an attacker.

With that out of the way, the simplest explanation in English is this:

Threat Models are a systematic and structured way to identify and mitigate security risks in our software.

There are various methodologies for doing threat modeling, one of which is a process popularized by Microsoft, called STRIDE.

What is STRIDE?

STRIDE is an acronym that stands for 6 categories of security risks: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privileges.

Each category of risk aims to address one aspect of security.

Let's dive into each of these categories.


Spoofing

Spoofing refers to the act of posing as someone else (i.e. spoofing a user) or claiming a false identity (i.e. spoofing a process).

This category is concerned with authenticity.

Examples:

  • One user spoofs the identity of another user by brute-forcing username/password credentials.
  • A malicious, phishing host is set up in an attempt to trick users into divulging their credentials.

You would typically mitigate these risks with proper authentication.
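For illustration, here is a minimal sketch of password-based authentication in TypeScript using the bcrypt library. The findUserByName helper and the user store behind it are hypothetical stand-ins, not part of any particular application; the point is that passwords are only ever stored and compared as hashes.

```typescript
import bcrypt from "bcrypt";

interface User {
  username: string;
  passwordHash: string; // bcrypt hash, never the plain-text password
}

// Hypothetical lookup; replace with a query against your own user store.
async function findUserByName(username: string): Promise<User | null> {
  return null; // stub for illustration
}

// Verify a login attempt; passwords are only ever compared as hashes.
async function authenticate(username: string, password: string): Promise<boolean> {
  const user = await findUserByName(username);
  if (!user) {
    // Hash anyway so unknown usernames take roughly as long as known ones.
    await bcrypt.hash(password, 10);
    return false;
  }
  return bcrypt.compare(password, user.passwordHash);
}
```

Pairing a check like this with rate limiting or multi-factor authentication further reduces the brute-force risk described above.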


Tampering

Tampering refers to malicious modification of data or processes. Tampering may occur on data in transit, on data at rest, or on processes.

This category is concerned with integrity.

Examples:

  • A user performs a SQL injection attack to modify or delete records in a database.
  • An attacker intercepts and modifies data in transit between a client and a server.

You would typically mitigate these risks by:

  • Properly validating users' inputs and encoding outputs.
  • Using prepared SQL statements or stored procedures to mitigate SQL injection (see the sketch after this list).
  • Integrating security static code analysis tools to identify security bugs.
  • Integrating composition analysis tools (e.g. snyk, npm audit, BlackDuck, etc.) to identify 3rd-party libraries/dependencies with known security vulnerabilities.
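As a concrete illustration of the prepared-statement bullet above, here is a minimal sketch using the node-postgres (pg) client. The orders table and customer_id column are hypothetical; what matters is that the user-supplied value is passed as a bound parameter rather than concatenated into the SQL string.

```typescript
import { Pool } from "pg";

// Connection settings are read from the standard PG* environment variables.
const pool = new Pool();

// The user-supplied value is passed as a bound parameter ($1), never
// concatenated into the SQL text, so it cannot change the query's structure.
async function findOrdersByCustomer(customerId: string) {
  const result = await pool.query(
    "SELECT id, total FROM orders WHERE customer_id = $1",
    [customerId]
  );
  return result.rows;
}
```

Because the query text and the data travel separately, a malicious input such as `' OR 1=1 --` stays data and cannot alter the query.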

Repudiation

Repudiation refers to the ability to deny that an action or an event has occurred.

This category is concerned with non-repudiation.

Examples:

  • A user denies performing a destructive action (e.g. deleting all records from a database).
  • Attackers commonly erase or truncate log files as a technique for hiding their tracks.
  • Administrators are unable to determine whether a container has started to behave suspiciously or erratically.

You would typically mitigate these risks with proper audit logging.
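To make the audit-logging mitigation a bit more concrete, here is a minimal sketch of an Express middleware that records who did what, and when. The x-user-id header is a hypothetical stand-in for however your application identifies callers, and console.log stands in for a real append-only, tamper-evident log sink.

```typescript
import express from "express";

const app = express();

// Minimal audit logging: record who did what, when, and with what outcome.
// In production, ship these entries to append-only, tamper-evident storage.
app.use((req, res, next) => {
  res.on("finish", () => {
    const entry = {
      timestamp: new Date().toISOString(),
      user: req.header("x-user-id") ?? "anonymous", // hypothetical caller id
      method: req.method,
      path: req.originalUrl,
      status: res.statusCode,
    };
    console.log(JSON.stringify(entry)); // stand-in for a real audit sink
  });
  next();
});

app.listen(3000);
```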


Information Disclosure

Information Disclosure refers to data leaks or data breaches. This could occur on data in transit, on data at rest, or even within a process.

This category is concerned with confidentiality.

Examples:

  • A user is able to eavesdrop, sniff, or read traffic in clear-text.
  • A user is able to read data on disk in clear-text.
  • A user attacks an application protected by TLS but is able to steal X.509 (SSL/TLS certificate) decryption keys and other sensitive information. Yes, this happened.
  • A user is able to read sensitive data in a database.

You would typically mitigate these risks by:

  • Implementing proper encryption for data in transit and at rest (see the sketch after this list).
  • Avoiding self-signed certificates; use certificates issued by a valid, trusted Certificate Authority (CA) instead.
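As a sketch of the encryption bullet above, here is one way to protect data at rest using Node's built-in crypto module with AES-256-GCM. Generating the key inline is only for illustration; in practice the key would come from a secrets manager or KMS, and encryption in transit would be handled by TLS.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// The key would normally come from a secrets manager; generated here only
// so the sketch is self-contained.
const key = randomBytes(32); // 256-bit key for AES-256-GCM

interface Encrypted {
  iv: Buffer;
  ciphertext: Buffer;
  tag: Buffer;
}

// AES-256-GCM provides both confidentiality and integrity for data at rest.
function encrypt(plaintext: string): Encrypted {
  const iv = randomBytes(12); // unique per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt({ iv, ciphertext, tag }: Encrypted): string {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption fails if the data was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```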

Denial of Service

Denial of Service refers to causing a service or a network resource to be unavailable to its intended users.

This category is concerned with availability.

Examples:

  • A user performs a SYN flood attack.
  • Storage (i.e. a disk or drive) fills up.
  • A Kubernetes dashboard is left exposed on the Internet, allowing anyone to deploy containers on your company's infrastructure to mine cryptocurrency and starve your legitimate applications of CPU. Yes, that happened too.

Mitigating this class of security risks is tricky, because the right solution depends heavily on your application and the infrastructure it runs on.

For the Kubernetes example, you would mitigate resource consumption with resource quotas.

For the storage example, you would mitigate this with proper log rotation and with monitoring/alerting when the disk is nearing capacity.
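Here is a minimal sketch of the log-rotation idea in TypeScript using Node's fs module. The log path and size threshold are hypothetical; a scheduler would call rotateIfNeeded periodically, and alerting on low disk space belongs in your monitoring stack rather than in application code.

```typescript
import { promises as fs } from "fs";

const LOG_FILE = "/var/log/myapp/app.log"; // hypothetical path
const MAX_BYTES = 50 * 1024 * 1024;        // rotate once the file reaches ~50 MB

// Rename the active log when it grows too large so it cannot fill the disk.
// A scheduler (cron, a systemd timer, etc.) would call this periodically.
async function rotateIfNeeded(): Promise<void> {
  try {
    const { size } = await fs.stat(LOG_FILE);
    if (size >= MAX_BYTES) {
      await fs.rename(LOG_FILE, `${LOG_FILE}.${Date.now()}`);
    }
  } catch {
    // The log file does not exist yet -- nothing to rotate.
  }
}
```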


Elevation of Privileges

Elevation of Privileges refers to gaining access that one should not have.

This category is concerned with authorization.

Examples:

  • A user takes advantage of a Buffer Overflow to gain root-level privileges on a system.
  • A user with limited or no Kubernetes permissions can elevate their privileges by sending a specially crafted request to a container with the Kubernetes API server's TLS credentials. Yes, this was possible.

Mitigating these risks would require a few things:

  • A proper authorization mechanism, such as role-based access control (see the sketch after this list).
  • Security static code analysis to ensure your code has little to no security bugs.
  • Composition analysis (a.k.a. dependency checking/scanning), like snyk or npm audit, to ensure that you're not relying on 3rd-party dependencies with known vulnerabilities.
  • Generally practicing the principle of least privilege, like running your web server as a non-root user.
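As a sketch of the role-based access control bullet above, here is a tiny Express-style permission check. The roles, permission strings, and the x-user-role header are all hypothetical; in a real system the caller's role would come from a verified session or token, not a client-supplied header.

```typescript
import express, { Request, Response, NextFunction } from "express";

type Role = "viewer" | "editor" | "admin"; // hypothetical roles

// Map each role to the permissions it grants.
const permissions: Record<Role, string[]> = {
  viewer: ["records:read"],
  editor: ["records:read", "records:write"],
  admin: ["records:read", "records:write", "records:delete"],
};

// Middleware factory: allow the request only if the caller's role grants
// the required permission; everything else gets 403 Forbidden.
function requirePermission(permission: string) {
  return (req: Request, res: Response, next: NextFunction) => {
    // Hypothetical: a real app would take the role from a verified token or session.
    const role = (req.header("x-user-role") ?? "viewer") as Role;
    if (permissions[role]?.includes(permission)) {
      return next();
    }
    res.status(403).json({ error: "forbidden" });
  };
}

const app = express();
app.delete("/records/:id", requirePermission("records:delete"), (req, res) => {
  res.status(204).end();
});
```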

Summary

So, STRIDE is a threat modeling methodology that should help you systematically examine and address gaps in the security posture of your applications.

In a future article, we'll take an application and go through this process so you can get a feel for how this works.

If you would like to propose an application for me to threat model next, feel free to drop suggestions in the comments below.

Updates

  1. Add more mitigations against tampering.
  2. Add more mitigations against information disclosure.

Oldest comments (2)

juliavii

Very nice article! I'm very interested in hearing about tools. I don't know of any others than Microsoft SDL or OWASP Dragon (which is still in development as far as I know). Thanks!

udoy-touhid

Thanks a lot for this article. Your writing has been very smooth and easy to understand. Please do a STRIDE model on a real scenario/application (any moderately complex system will do), so that we can learn.

The webtrends link is not working; it has been deleted. Alternatively, this can be used: web.archive.org/web/20151019112355....