Gary Robinson
The all-important triaging of security issues

Security tools can be noisy. In 20 years, we haven’t seen a single security tool return a set of issues that is 100% what needs to be worked on. Ultimately, there are a few main aspects to triaging lists of security issues that help you get better results from your tools.

In this post, we discuss the problems of triaging security test results and how they can be solved:

  • Why certain issues may not need to be addressed
  • Security issue backlogs
  • How the Uleska Platform facilitates aspects of DevSecOps triaging


There are a few reasons why issues returned from a security tool may not need to be handled. These are:

1. False positives
For years, this has been one of the biggest complaints against security tools. Maybe the tool flags a port you intentionally left open, or its pattern-matching logic simply mistakes benign code for an issue.

2. The risks aren’t real
Sometimes a tool raises an issue that may be a problem in theory but just isn’t something that needs to be prioritised right now. We’ve all seen a missing security header flagged that our security policy doesn’t require, or a vulnerability reported in a library/framework that can’t be exploited on our system because we never call the affected function.

3. Duplicates
Now that our DevSecOps processes run multiple tools, there’s a good chance we’ll see the same issue raised several times. Different tools focus on different security checks, though they will still likely overlap. We also see this often with container or SCA security tools that draw on different vulnerability databases.

4. Duplicate occurrences
We typically see this when there’s some issue in the header or footer of a website. This often results in you getting the same issue raised for every page on your website, even though you’ll only need to apply one fix to solve it.
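Both kinds of duplication can be collapsed with a tool-agnostic fingerprint. The sketch below keys each finding on its weakness ID plus affected component, deliberately ignoring the page URL, so cross-tool duplicates and per-page occurrences fold into one issue. All field names here are illustrative, not any specific tool’s output schema.

```python
from collections import defaultdict

def fingerprint(issue):
    """Build a tool-agnostic key for a finding.

    Keyed on the weakness ID (CWE here; a CVE would suit SCA/container
    findings) plus the affected component, ignoring the page URL so the
    same header issue on 1,000 pages collapses to one entry.
    """
    return (issue["cwe"], issue["component"])

def deduplicate(findings):
    """Collapse raw findings from multiple tools into unique issues,
    keeping an occurrence count per issue for reporting."""
    unique = {}
    occurrences = defaultdict(int)
    for issue in findings:
        key = fingerprint(issue)
        occurrences[key] += 1
        unique.setdefault(key, issue)  # keep the first finding seen
    return list(unique.values()), dict(occurrences)

# Illustrative raw findings: two tools, several pages, one SAST hit.
findings = [
    {"tool": "zap",     "cwe": "CWE-693", "component": "X-Frame-Options header", "url": "/home"},
    {"tool": "zap",     "cwe": "CWE-693", "component": "X-Frame-Options header", "url": "/about"},
    {"tool": "burp",    "cwe": "CWE-693", "component": "X-Frame-Options header", "url": "/home"},
    {"tool": "semgrep", "cwe": "CWE-89",  "component": "orders.py"},
]
unique, counts = deduplicate(findings)
print(len(unique))  # 2 unique issues from 4 raw findings
```

Four raw findings reduce to two real issues: one header fix covers every page it was reported on, and the cross-tool duplicate disappears.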


The concept of a ‘security backlog’ is changing with DevSecOps, due to the frequency of changes and testing. It becomes especially important for the DevSecOps process to distinguish the latest list of issues, produced by the current change, from the previous list of security issues.

In the past, security tests were run manually, maybe every 6 or 12 months, and resulted in a long list of issues. This list would be triaged for false positives, leaving a list of real issues to work on.

That resulting list of issues would be fixed, and the application wouldn’t be tested again for another 6-12 months. Any issues left over were simply accepted risk.

The frequency of DevSecOps changes this narrative. There will likely always be a backlog of security issues to work on for every project, and this is reflected in the issues returned by DevSecOps tools. Yet that backlog will constantly change from release to release. As an example, if our Tuesday 10am pipeline returned 50 issues - what does that mean?

Does it mean:

  • We already had 50 security issues and nothing has changed?
  • Before this pipeline scan, we’d fixed all issues and 50 brand new security bugs were introduced on Tuesday morning?
  • Before this pipeline scan, we had 60 issues and we’ve fixed 10?
  • Before this pipeline scan, we had 20 issues and introduced 30 new security bugs?
  • Before this pipeline scan we had 50 issues, but some have been fixed while new issues are introduced and the risk profile has changed?

If we assume we’re always going to have that backlog, then part of our task in DevSecOps is to easily determine the delta from the last security run - the difference in issues (or risk) caused by the code change currently going through the pipeline. If we’ve introduced a few new issues, now is the time to flag them - by email, Slack, Jira or in CI/CD - since the new risk isn’t ‘real’ yet: it hasn’t been deployed. The dev team can then fix issues quickly while they’re still in code, before they go live and become exploitable risks.
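Once each finding carries a stable fingerprint, computing that delta reduces to set arithmetic. A minimal sketch, using placeholder string fingerprints in place of real issue keys:

```python
def triage_delta(previous, current):
    """Compare the fingerprints from the last run against this run.
    Fingerprints are assumed to be stable, tool-agnostic keys
    (e.g. weakness ID + affected component)."""
    prev, curr = set(previous), set(current)
    new = curr - prev        # introduced by this change
    fixed = prev - curr      # resolved since the last run
    unchanged = curr & prev  # the standing backlog
    return new, fixed, unchanged

# Placeholder fingerprints standing in for real issue keys.
previous_run = {"A", "B", "C"}
current_run = {"B", "C", "D", "E"}

new, fixed, unchanged = triage_delta(previous_run, current_run)
print(sorted(new), sorted(fixed))  # ['D', 'E'] ['A']
```

This answers the “Tuesday 10am, 50 issues” question directly: the raw count alone is ambiguous, but the delta tells you exactly which issues were introduced and which were fixed by this change.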

However, current security tools are not designed for this. Security tools are designed to run security engines, determine issues and hand a list of them back to you. Now we’re running multiple tools, and someone has to link all of those lists together to work out what’s changed in our security backlog.

At the end of the day, it’s this ‘what’s changed?’ that’s important to security and the business. Many security policies state things such as “changes can’t go live with critical issues,” or “if the risk increases by 20%, or goes over X threshold, then it should be run past Y team.” This is the business end of the guardrail that DevSecOps provides.
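Policies like these can be encoded directly as a pipeline gate. The sketch below is a hypothetical check, not any platform’s actual API; the 20% growth limit and risk ceiling are just the kinds of thresholds a security policy might set.

```python
def gate(previous_risk, current_risk, critical_count,
         max_increase=0.20, risk_ceiling=1000):
    """Hypothetical policy gate mirroring the rules quoted above:
    - changes can't go live with critical issues
    - risk may not grow by more than 20% or exceed a threshold
    Returns True if the pipeline may proceed; a False result is where
    you'd notify team Y via email/Slack/Jira. All numbers are
    illustrative; real scores would come from your own risk model."""
    if critical_count > 0:
        return False
    if previous_risk > 0 and (current_risk - previous_risk) / previous_risk > max_increase:
        return False
    if current_risk > risk_ceiling:
        return False
    return True

print(gate(previous_risk=500, current_risk=550, critical_count=0))  # True: +10%, under ceiling
print(gate(previous_risk=500, current_risk=650, critical_count=0))  # False: +30% increase
```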

So in the mindset of DevSecOps and in the timeframe of the pipeline, we need to:

  • Remove (or re-remove) false positives, duplicates, non-issues.
  • Compare the new set of ‘real’ issues against previous (using context).
  • Make a decision based on our security policy on how to handle any change.

This means DevSecOps technology should have the ability to perform this consolidated vulnerability management across multiple security tools. Keep in mind this includes all tools - commercial, open-source and all those custom tools and scripts every business relies upon.
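Consolidation usually starts with mapping each tool’s native output into one shared schema before any triage happens. A sketch with per-tool adapters; the raw field names shown are illustrative stand-ins, not guaranteed to match the tools’ real report formats:

```python
def normalize(tool_name, raw):
    """Map one tool's native finding into a shared issue schema.
    The raw keys used for 'zap' and 'trivy' are illustrative
    approximations of their JSON reports, not exact contracts."""
    adapters = {
        "zap":   lambda r: {"id": r["cweid"], "component": r["url"], "severity": r["risk"]},
        "trivy": lambda r: {"id": r["VulnerabilityID"], "component": r["PkgName"], "severity": r["Severity"]},
        # custom in-house scripts get an adapter too - consolidation
        # must cover commercial, open-source and bespoke tooling alike
    }
    issue = adapters[tool_name](raw)
    issue["source_tool"] = tool_name
    return issue

# Two very different raw findings land in the same shape.
web_issue = normalize("zap", {"cweid": "CWE-693", "url": "/home", "risk": "Medium"})
sca_issue = normalize("trivy", {"VulnerabilityID": "CVE-2021-44228", "PkgName": "log4j-core", "Severity": "Critical"})
print(web_issue["id"], sca_issue["id"])
```

Once everything shares one shape, the deduplication, delta and policy-gate steps above can run over a single list regardless of which engine produced each finding.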

You’ll also notice a positive cultural effect this will have between security and development. As mentioned previously, developers don’t want to handle long lists of issues in every release.

By automating security triaging, that conversation changes to flagging only ‘real’ issues during the pipeline - much more manageable, and something that enables rather than distracts. If the security pipeline can easily be extended so devs can run it in feature and other branches before merging to main, the collaboration gets even better: development is empowered and security teams save a lot of time.


Firstly, for every security toolkit run, the Uleska Platform collects all of the issues into a single database and common format. This list can then have false positives, duplicates and non-issues flagged via the UI or API, leaving only the real issues that affect the project.

Future runs of the same security tools will continue to return these invalid issues, but they’ll simply be marked as invalid issues again and kept out of reports and communications with other teams. This saves a lot of time for both security and development teams, whilst improving the quality of results for everyone.
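One way to picture this is a persisted suppression list keyed by fingerprint: each triage decision is recorded once, and every later run filters against it automatically. The file format below is an assumption for illustration only, not the Uleska Platform’s actual storage.

```python
import json

def load_suppressions(path="triage.json"):
    """Load fingerprints already triaged as false positives, duplicates
    or accepted risks. Assumed format: {"fingerprint": "reason"},
    committed alongside the code so decisions survive between runs."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def filter_findings(findings, suppressions):
    """Drop findings whose fingerprint was previously marked invalid,
    so repeat runs surface only real issues."""
    return [f for f in findings if f["fingerprint"] not in suppressions]

# Triage decision recorded once...
suppressions = {"CWE-693:X-Frame-Options": "policy does not require this header"}
# ...keeps suppressing the same re-raised finding on every future run.
findings = [
    {"fingerprint": "CWE-693:X-Frame-Options"},
    {"fingerprint": "CWE-89:orders.py"},
]
real = filter_findings(findings, suppressions)
print(len(real))  # 1
```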

For the second aspect of comparing lists of issues against previous runs, the Uleska Platform automatically stores all historical issue lists. When a new run is conducted, the lists of issues can be compared to instantly find new or fixed issues.

This can be quickly returned to the CI/CD pipeline for any decisions to be made and is stored in metrics so continual performance, statistics and improvements can be tracked. Trying to track this information across many tools would be a full-time job. That’s why we’ve automated it, so teams don’t waste time collecting and comparing lists of issues every release.

Instead, logic in the CI/CD pipeline can be applied to make decisions based on the consolidated information passed back by the Uleska Platform, without security resources being utilised.
