Gary Robinson


Adding risk prioritisation to your pipeline security

DevSecOps increases both the number of issues found and the speed at which they must be dealt with. In reality, only a small number of those issues will pose a massive risk to the business. Unfortunately, security tools give only part of the risk picture when they report an issue.

That’s why being able to pinpoint the really big issues automatically and in real time helps you know the big risks are covered. It also helps the development team focus on the important issues rather than lots of small bugs. Let’s look at this in more detail.

  • The problem with security issues
  • The implications for DevSecOps
  • How to apply risk-based testing to software security issues
  • Why the Uleska Platform uses the FAIR Institute methodology


The problem with security issues

The problem with security issues is that they are not created equal. In any list of 1,000 security issues, there will be some that would put the company out of business if breached, and some that wouldn’t even raise an eyebrow.

This isn’t just a technical discussion, but a business one as well. Let’s take a look at an example. Suppose we have found two SQL injection issues in two of our systems. From a technical standpoint, the two injections are exactly the same. However, let’s look at the context of the systems:

System 1:

  • Internal application
  • Used by cleaning staff to schedule which rooms are going to be cleaned
  • About four users
  • No sensitive data

System 2:

  • External application
  • Used for processing financial and PII data core to the business
  • About four million users
  • Lots of sensitive financial and personal data

In this example, we have exactly the same technical SQL injection issue, which would likely be flagged by security tools in exactly the same way (usually as high or critical). However, there’s no real business risk from system 1, while the business risk from system 2 could end the company.

If system 1 were breached, we’d likely never hear about it (and who would want to breach it?). System 2, by contrast, is similar to the TalkTalk hack from 2015, which resulted in large fines.

However, from a pure tooling point of view, both SQL injection issues would likely be reported with the same severity, meaning security and development teams would have needed to treat both the same.


The implications for DevSecOps

1. Release risk (new issues)

When new issues are found in the latest pipeline security scan, what is their impact or risk? Should we stop the build/release, or let it through? This is important for security policy decisions. If a few new, very low-risk issues are found, we may be able to proceed with the release, get the new functionality out, and apply fixes later when time allows.

However, if there are major issues in an application handling sensitive data for millions of users, we’d likely make a different decision. We also want to automate as much as possible here, given the scale of issues that automated testing will report every day.

Processes (and tooling) that stop the build/release simply because any vulnerability exists, regardless of risk, end up being turned off or ignored because they are too noisy. There are usually some security issues in the backlog; they’re known about and will be addressed, but they shouldn’t stop the release.

On the other hand, adding risk-based analysis of each new issue in the pipeline means better decisions can be made (or automated) for go/no-go.

That’s why making this decision as easy and as automated as possible is not only an advantage but is likely the only way it will succeed in DevSecOps.
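As a sketch of what that automated go/no-go decision could look like, here is a minimal gate function. The function name, issue fields and threshold are all illustrative assumptions, not any specific product’s API:

```python
# Hypothetical release gate: function and field names are illustrative
# assumptions, not any specific product's API.

RISK_THRESHOLD_USD = 50_000  # assumed policy: block release above this


def release_gate(new_issues, threshold=RISK_THRESHOLD_USD):
    """Return (ok, blockers): ok is False when any new issue's
    estimated dollar risk exceeds the threshold."""
    blockers = [i for i in new_issues if i["risk_usd"] > threshold]
    return len(blockers) == 0, blockers


new_issues = [
    {"id": "SQLI-1", "risk_usd": 1_000},      # internal cleaning app
    {"id": "SQLI-2", "risk_usd": 1_000_000},  # external financial app
]
ok, blockers = release_gate(new_issues)
print(ok)  # False: SQLI-2 alone is enough to block the release
```

Note that both issues are SQL injections; only the attached dollar risk separates the one that blocks the release from the one that doesn’t.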

2. Overall risk reduction (backlog issues)

Many companies have thousands of backlog security issues on their lists, yet as we’ve seen, those issues are not equal in terms of risk. What a company, and especially its stakeholders, wants is to reduce that risk effectively and efficiently over time, and to be seen to do so.

Let’s say we have two issues in the backlog that each take one day to fix. If one issue is classed as $1 million in cyber risk and the other as $1,000, then you fix the $1 million risk first. Expand this to the thousands of issues across the organisation, and applying risk metrics means you can burn down the top 10% or so of the highest-risk issues and dramatically reduce your overall risk.
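The backlog ordering described above can be sketched in a few lines. The issue IDs and dollar figures below are made up for illustration:

```python
import random

# Illustrative backlog: issue IDs and dollar-risk figures are made up.
random.seed(7)
backlog = [{"id": f"ISS-{n}", "risk_usd": random.randint(100, 2_000_000)}
           for n in range(1000)]

# Order the backlog by estimated dollar risk, highest first.
ranked = sorted(backlog, key=lambda issue: issue["risk_usd"], reverse=True)

# Burn down the top 10% highest-risk issues first.
top_ten_percent = ranked[: len(ranked) // 10]
covered = sum(issue["risk_usd"] for issue in top_ten_percent)
total = sum(issue["risk_usd"] for issue in backlog)
print(f"Fixing {len(top_ten_percent)} issues removes "
      f"{covered / total:.0%} of the estimated dollar risk")
```

In a real backlog, risk is usually far more skewed than this uniform sample, so the top slice tends to cover a much larger share of the total.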

Wrapping this risk prioritisation into DevSecOps, which then scales across all teams, becomes a very effective way to reduce - and keep reducing - the risk to the business.


How to apply risk-based testing to software security issues

Around the world, many regulations are focusing on the risk of issues as well as their technical details. For example, the NYDFS Cybersecurity Regulation focuses on regular risk assessments. NIST has also recently announced support for the FAIR Institute cyber value-at-risk methodology to determine which security issues pose the greatest risk to the business.

There are a few ways to apply risk-based testing to software security issues. We prefer the FAIR Institute methodology (as used by NIST), which combines:

  • The technical aspect of the issue - i.e. how easy is it to exploit, and what effect does it have on availability, integrity and confidentiality? Think CVSS v3 aspects and you're effectively there.
  • The nature of the system containing the bug - i.e. is it online or internal, how many users does it have, and what assumptions could you make about the skill of those users to breach the system?
  • The data contained in the system - are there millions of PII, financial or healthcare records? Remember that people rob banks because that’s where the money is.

By combining these inputs about a system and its bugs and applying the FAIR Institute methodology, you can quickly and automatically get strong risk estimates for the impact of each bug. With these, you can apply thresholds during the build/release (i.e. if the risk of an issue is below X, continue; if it’s higher, take some action to alert). For the backlog, you can simply order issues by the calculated cyber value-at-risk number.

As the FAIR methodology describes, each issue is assigned a dollar-value risk range representing the cost of that risk to the business. This is a combination of the impact if the issue were breached and the likelihood of the breach occurring on the system in question.
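The impact-times-likelihood idea can be shown with a greatly simplified point-estimate sketch. Real FAIR analysis works with calibrated ranges and Monte Carlo simulation rather than single values, and every figure below is an illustrative assumption:

```python
# Greatly simplified sketch of "impact x likelihood". Real FAIR analysis
# uses calibrated ranges and Monte Carlo simulation, not point values;
# all figures here are illustrative assumptions.

def annualised_risk_usd(loss_if_breached_usd, breaches_per_year):
    """Expected annual loss: cost of a breach times its yearly likelihood."""
    return loss_if_breached_usd * breaches_per_year

# System 1: internal cleaning-rota app - tiny impact, rarely targeted.
print(annualised_risk_usd(5_000, 0.01))        # 50.0

# System 2: external financial app - huge impact, actively targeted.
print(annualised_risk_usd(10_000_000, 0.25))   # 2500000.0
```

The same technical flaw thus yields dollar risks several orders of magnitude apart once system context and data sensitivity are factored in.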

That’s why our SQL injection in the internal system, with no sensitive information and a small number of internal users, isn’t much of a risk, while the same issue exposing millions of sensitive data items to the internet carries a far bigger dollar risk.


Why the Uleska Platform uses the FAIR Institute methodology

The Uleska Platform has implemented the FAIR Institute methodology in its risk algorithms, so risk analysis can be applied automatically to each issue returned from security tools. This means that as soon as we have the issue, we can flag its dollar risk, pass this back to CI/CD systems for decision making, flag it in our UI and reports, and track it historically for metrics.

To do this, the Uleska Platform extracts CVSS and similar information from the security issues returned by the various integrated security tools, and combines it with quick-to-configure descriptions of the projects.

For example, a new application or project can set the expected range of users - whether tens, hundreds or millions - the nature of the data processed (including financial, healthcare and personal data), and some details of the project environment.

With this, the FAIR methodology is quickly applied so that each issue has a dollar risk associated with it. When an issue is fixed, the project shows the risk improvement. Finally, when deciding which features and security fixes to schedule into the next sprint, the dollar risk helps make the case for certain bugs to be fixed (or not).

To learn more about the challenges of automating DevSecOps and how to overcome them, take a look at our playbook.
