a little Fe2O3.nH2O-y

DevSecCon 2019: CI/CD write-up

In December I was lucky enough to attend DevSecCon 2019 in London through work, and had a blast. It was my first non-language/framework conference and it was really interesting seeing the variety of topics that were on the agenda.

My favourite session, though, was Securing the Sugar out of Azure DevOps, given by Colin Domoney of Veracode. I hadn't used Azure Pipelines before, so a lot of the session was me getting used to the ACL system and putting a basic pipeline together while hearing Colin talk about some of the possibilities they'd explored for different security practices in CI/CD pipelines. I took notes and thought I'd share some of what I learned here.

Our two aims are to shift left as far as possible (bad news doesn't age well) and automate absolutely everything we can (don't do anything manually three times).

Secrets Checking

Public GitHub repos are constantly scanned for credentials, and I've personally committed a few myself, only to have my (friend's, sorry Chris) account for some service get locked because of it. We can use a tool like TruffleHog in a pre-commit hook to make sure we aren't about to commit anything sensitive. Of course, our .gitignores should be checked too, and changes to them could go through manual approval.
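
To make that concrete, here's a minimal sketch of the kind of check a pre-commit hook could run, written in Python and saved as .git/hooks/pre-commit. The two patterns are just well-known examples; in practice you'd lean on TruffleHog itself, which adds entropy checks and a far bigger rule set.

```
#!/usr/bin/env python3
"""Toy pre-commit secret check; a real setup would call TruffleHog instead."""
import re
import subprocess
import sys

# A couple of well-known patterns; a real scanner has far more, plus entropy checks.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key material
]

def staged_files():
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "-z"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.split("\0") if f]

def main():
    findings = []
    for path in staged_files():
        try:
            text = open(path, errors="ignore").read()
        except OSError:
            continue  # deleted or renamed files won't exist on disk
        for pattern in PATTERNS:
            if pattern.search(text):
                findings.append((path, pattern.pattern))
    for path, pat in findings:
        print(f"Possible secret in {path}: matches {pat}", file=sys.stderr)
    sys.exit(1 if findings else 0)

if __name__ == "__main__":
    main()
```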

GitOps: Push all the Things

Briefly touched on in that session, but something I picked up at another DevSec meetup, was the concept of GitOps, or infrastructure-as-code taken to the extreme. The more we have defined as code in our repositories, the more we can instantly and easily validate and verify earlier in the process. The same goes for using services with good APIs - if we can pull a VPC's security group configuration from AWS and validate that only the ports we declared are open, great!
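
As a sketch of that idea (using boto3 rather than the CLI, and with a made-up ALLOWED_PORTS policy), a pipeline step could pull the account's security groups and fail if anything outside the declared ports is open:

```
"""Sketch: verify no security group in our account opens unexpected ports.
Assumes boto3 credentials are configured; ALLOWED_PORTS is a hypothetical policy."""
import sys

import boto3

ALLOWED_PORTS = {80, 443}  # whatever our declared infrastructure-as-code says

def main():
    ec2 = boto3.client("ec2")
    violations = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group["IpPermissions"]:
            # Rules with no FromPort (e.g. all-traffic rules) count as violations too.
            from_port = rule.get("FromPort")
            to_port = rule.get("ToPort")
            if from_port is None or any(
                p not in ALLOWED_PORTS for p in range(from_port, to_port + 1)
            ):
                violations.append((group["GroupId"], from_port, to_port))
    for group_id, lo, hi in violations:
        ports = "all traffic" if lo is None else f"ports {lo}-{hi}"
        print(f"{group_id} allows {ports}, outside the allowed set {sorted(ALLOWED_PORTS)}")
    sys.exit(1 if violations else 0)

if __name__ == "__main__":
    main()
```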

On the topic of secrets, one tool that I haven't played with but that got an honourable mention was AGWA/git-crypt, which uses .gitattributes entries to configure which files should be transparently encrypted on commit and decrypted on checkout. A really cool concept, meaning our devs can develop and push application secrets like any other files, and they'll be encrypted in our repository, staying that way for anyone without the key!

Open-Source Scanning

Done to death, but essential. These are tools which check the versions of any open-source components we're using, and if a component matches a signature with known vulnerabilities, we stop the build. They broadly fall into the categories of container/image checks like docker/docker-bench-security and dependency checkers like OWASP Dependency-Check.
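
A toy version of the "stop the build" idea might look like this; the VULNERABLE dictionary is a stand-in for the advisory data a real tool like OWASP Dependency-Check maintains for you:

```
"""Sketch: fail the build if a pinned dependency appears in a vulnerability feed.
The VULNERABLE dictionary is hypothetical data for illustration only."""
import sys

# Hypothetical feed: package name -> set of versions with known CVEs.
VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},
}

def parse_requirements(path="requirements.txt"):
    """Yield (name, version) pairs from simple 'name==version' pins."""
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                yield name.lower(), version

def main():
    bad = [
        (name, version)
        for name, version in parse_requirements()
        if version in VULNERABLE.get(name, set())
    ]
    for name, version in bad:
        print(f"{name}=={version} has known vulnerabilities; failing the build")
    sys.exit(1 if bad else 0)

if __name__ == "__main__":
    main()
```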

I swear there's another NPM/GitHub/X account takeover or malicious injection article topping HackerNews every other week. We want to take in as much community-sourced intelligence as possible, and these tools are a great source of it.

Static Application Security Testing (SAST)

I was recently asked how I'd do SAST in an environment where the company can't simply throw money at the problem, and I was speechless. It hadn't occurred to me that budget was a concern for some Cyber departments, and I cobbled together an answer about open-source alternatives to Fortify and Checkmarx, making a note to look more into this scene. Depending heavily on language, some contenders would be:

It goes without saying, but we should focus heavily on unit test-based philosophies in our software development for a lot of reasons, with our SAST stack just there to catch the left-overs.
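
For what it's worth, wiring an open-source SAST tool into a build step can be as simple as the sketch below. I've used Bandit (Python-only) purely as a stand-in; the JSON parsing and the fail-on-anything gate are illustrative choices rather than anything Colin recommended.

```
"""Sketch: run an open-source SAST tool as a build step and gate on its findings.
Bandit is just a stand-in here; swap in whatever fits your stack."""
import json
import subprocess
import sys

def run_bandit(source_dir="src"):
    # Bandit exits non-zero when it finds issues, so don't use check=True.
    proc = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout)
    return report.get("results", [])

def main():
    findings = run_bandit()
    for item in findings:
        print(f"{item['filename']}:{item['line_number']} {item['issue_text']}")
    # Fail the build if anything was flagged; a real gate might filter by severity.
    sys.exit(1 if findings else 0)

if __name__ == "__main__":
    main()
```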

Dynamic Application Security Testing (DAST)

At my company we have an amazing pentest team whose time gets booked out for every significant release of a project, unless a member of the sec team pre-approves the change. DAST tools will typically involve spinning up a container with a version of the application and automating attacks against it: fuzzing input or looking for changes in route responses from previous scans.
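
As a toy illustration of the "look for changes in route responses" idea: the base URL, routes and baseline file below are all made up, but the shape of the check is roughly this.

```
"""Toy version of diffing route responses against the last scan.
BASE_URL, ROUTES and the baseline file are all assumptions for illustration."""
import hashlib
import json
import sys
from pathlib import Path

import requests

BASE_URL = "http://localhost:8080"         # the app spun up in a container for the scan
ROUTES = ["/", "/login", "/api/health"]    # hypothetical routes under test
BASELINE = Path("dast-baseline.json")

def snapshot():
    results = {}
    for route in ROUTES:
        resp = requests.get(BASE_URL + route, timeout=10)
        results[route] = {
            "status": resp.status_code,
            "body_sha256": hashlib.sha256(resp.content).hexdigest(),
        }
    return results

def main():
    current = snapshot()
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(current, indent=2))
        print("No baseline found; wrote one for the next scan.")
        return
    previous = json.loads(BASELINE.read_text())
    changed = [r for r in ROUTES if previous.get(r) != current[r]]
    for route in changed:
        print(f"{route} changed since the last scan: {previous.get(route)} -> {current[route]}")
    sys.exit(1 if changed else 0)

if __name__ == "__main__":
    main()
```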

Functional tests can be prepared by the developers and integrated into their CI/CD - they know their code best and can protect against common attacks early. Of course, we don't want to presume they've added a check for something they might not have, so, coming back to the resource problem, we can go some of the way with solutions like:

Commercial or open-source, we broadly have three strategies for adding SAST/DAST into CI/CD:

  1. Synchronous - on build we run our tool and simply wait for it to finish. This is great because we can fail or pass our build on the back of the results, but not so great if our tool takes an hour and we want to release multiple times a day.
  2. Asynchronous - on build we kick off our tool in another process and proceed with the build. In the event of a failure we flag the build as failed (depending on the CI/CD tool) and roll back the release to the last stable build.
  3. Mixed - we select some balance of the two, potentially running file analysis or faster tools in-band and slower tools out-of-band. (The first two approaches are sketched below.)
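
Here's that rough sketch of the first two strategies, with a placeholder scanner command standing in for whatever tool you're running:

```
"""Rough sketch of strategies 1 and 2 with a placeholder 'scanner' command.
The scanner binary, its arguments and the flagging step are all hypothetical."""
import subprocess
import sys

SCAN_CMD = ["scanner", "--target", "build/"]  # stand-in for your SAST/DAST tool

def scan_synchronously():
    """Strategy 1: block the build until the scan finishes, and gate on the result."""
    result = subprocess.run(SCAN_CMD)
    if result.returncode != 0:
        print("Scan failed; failing the build.")
        sys.exit(1)

def scan_asynchronously():
    """Strategy 2: kick the scan off and let the build continue.
    Something else (a webhook, a follow-up job) flags the build and rolls back on failure."""
    return subprocess.Popen(SCAN_CMD)

if __name__ == "__main__":
    scan_synchronously()
    # or: proc = scan_asynchronously(); ...carry on with the build...
```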

Secrets Management

Having worked around Data Protection and Applied Cryptography for the last two years, I love talking to whoever will listen about the applications of HashiCorp Vault, AWS KMS or Azure Key Vault. These solutions let us wrap all of our application secrets in a central service, cloud or self-hosted, which we can lock down as much as possible and control access to. We can enforce minimum key lengths and require TLS, maintain an actual ACL for our secrets, and record activity statistics. There are lots of reasons we should be using at least some of HashiCorp Vault's engines; personally, my favourite use case I've heard is ultra-short-lived TLS certificates, with validity periods reduced to minutes. When a certificate is about to expire, the service (provided it still meets the policy criteria) makes a new request to Vault, a fresh certificate is signed and returned, and the application starts serving it. Amazing!
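
For the certificate use case, the request to Vault can be a single call to the PKI secrets engine. In this sketch the mount path (pki), role name (my-service) and ten-minute TTL are all assumptions about how the engine has been set up:

```
"""Sketch: fetch a short-lived TLS certificate from Vault's PKI secrets engine.
Assumes the engine is mounted at 'pki' with a role named 'my-service';
VAULT_ADDR and VAULT_TOKEN come from the environment."""
import os

import requests

VAULT_ADDR = os.environ["VAULT_ADDR"]
VAULT_TOKEN = os.environ["VAULT_TOKEN"]

def issue_certificate(common_name, ttl="10m"):
    resp = requests.post(
        f"{VAULT_ADDR}/v1/pki/issue/my-service",
        headers={"X-Vault-Token": VAULT_TOKEN},
        json={"common_name": common_name, "ttl": ttl},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    return data["certificate"], data["private_key"]

if __name__ == "__main__":
    cert, key = issue_certificate("app.internal.example.com")
    # The service would write these out (or hold them in memory) and start serving,
    # then repeat the request shortly before the ten minutes are up.
    print(cert[:64], "...")
```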

There are also secrets plugins for most self-hosted CI/CD solutions, including Jenkins Credentials, GoCD File-based Secrets and ConcourseCI Credential Management.


Thanks again to Colin, and to the organisers - it was a great event and I'd love to attend again in 2020.
