Malware Scanning on the Puppet Forge

Another supply chain attack hit the news last week. This time it was a configuration management agent installed on most Azure images. I'm not bothering to link to it because it's just another in a long chain of attacks this year and, by the time this is published, there will likely be another one. Maybe this time it will be yet another background bitcoin miner hiding inside an innocuous NPM package.

I've been in the Puppet ecosystem for over a decade deploying Forge modules all over the world, but to tell the truth, the idea of running unaudited code has always bothered me just a bit. When you install an open source package with yum or apt, you can trust that it went through an audit process with at least a few pairs of eyeballs on it before getting to the repositories. But now with the rise of self-publishing package registries like NPM, PyPI, or RubyGems, that assumption is gone. And Golang effectively does away with the repository altogether! That's a lot of trust we put into the publishers of these packages.

The Puppet Forge Module Ecosystem

So far the Puppet ecosystem has not been attacked with malicious code injected into Forge modules. But we cannot assume this will always be the case. The Forge team has been hard at work for the past few months building out a malware scanning framework.

Now, to be clear, this doesn't replace your own security mitigations. You should still audit untrusted code. You should still run your own virus protections. There are many layers in a robust security profile, and this is only one of them. But what we can do is give you relatively high confidence that the Puppet modules you use are not introducing malware themselves.

We had to first put some thought into what it meant to be a secure Puppet module and balance that against what we could actually test for programmatically. One of the challenges of scanning configuration management code is that there's inherently so much overlap with malicious code already! It's a bit like the definition of a "weed" being an unwanted plant, no more and no less. It's all about context.

For example, downloading a tarball and running a shell command is exactly how a bunch of Tomcat modules work, so we can't simply flag that behavior outright. But we can identify file resources with world-writable permissions, wide-open firewall rules, or known insecure Apache configurations, and we can flag modules that include known malware files or URLs.
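
To make that concrete, here's a toy sketch (ours, not the Forge's actual tooling) of the kind of check this implies. The regexes, the security_smells function, and the example manifest are purely illustrative; a real implementation would inspect parsed Puppet code rather than raw text.

```python
import re

# Illustrative patterns only -- a real check would inspect parsed Puppet code
# (for example via puppet-lint plugins) instead of using regexes.
WORLD_WRITABLE = re.compile(r"mode\s*=>\s*'[0-7]*[2367]'")    # other-write bit set
OPEN_FIREWALL = re.compile(r"source\s*=>\s*'0\.0\.0\.0/0'")   # rule open to the world

def security_smells(manifest):
    """Return human-readable findings for a Puppet manifest string."""
    findings = []
    if WORLD_WRITABLE.search(manifest):
        findings.append("file resource with world-writable permissions")
    if OPEN_FIREWALL.search(manifest):
        findings.append("firewall rule open to 0.0.0.0/0")
    return findings

example = """
file { '/etc/app/app.conf':
  ensure => file,
  mode   => '0777',
}
"""
print(security_smells(example))   # ['file resource with world-writable permissions']
```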

Then we had to estimate the value and the level of effort of writing checks for each of these cases, and rank them to see which would deliver the most impact the soonest.

Quality Scores

Let's take a small detour and talk about the Forge quality scores for a moment. When we first introduced the automatic module quality analysis, it was implemented as a Jenkins pipeline, which obviously got more and more crufty as the years went by. Over the last year, we've been incrementally porting these pipelines to a more modern container-based workflow. These can be invoked anywhere with a container runtime or in any cloud service with a container orchestrator. Tangent: watch for another blog post showing you how to run these locally to predict your Forge quality score!

This workflow means that the cost of building new quality score analyses is now relatively low. Just update the backend and database schemas to accept the new scores, then build a frontend to present them, if needed. (And yes, this papers over a ton of work!)

But it also made it a perfect system for security analysis. Anything that we could build into a container could be turned into something that the Forge could use.

Building the Scanner

A bit of research indicated some prior art that we could build upon. Last year a team of students from the University of Pernambuco released a "security smell" scanner for Puppet code and prior to that, a student at FernUniversität in Hagen built a set of puppet-lint plugins to check for security issues. ClamAV, the popular malware scanner, also already had a container available.

Ultimately we decided that the malware scan would have the biggest impact for our users, and got to work. We quickly realized that while the ClamAV solution was useful, we'd get even more value with an enterprise subscription to VirusTotal — and our internal security team was stoked to get their hands on such a powerful tool.

VirusTotal provides an upload API. Although it doesn't know Puppet code natively, it knows how to uncompress the module tarball format and how to scan files and their contents for known malicious code. This meant (again, papering over the gory details) that all we needed to do was upload the module itself and consume the results. VirusTotal aggregates scan results from over 70 antivirus scanners and URL blocklists, so as you can imagine, the results were quite, shall we say, "complete."
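
As a rough sketch of that flow, here's our illustration against the public VirusTotal v3 API. This is not the Forge's actual integration code, and it skips the polling, retries, and large-file handling a production service needs.

```python
import requests  # third-party HTTP client, assumed to be installed

VT_API = "https://www.virustotal.com/api/v3"

def scan_tarball(tarball_path, api_key):
    """Upload a module tarball to VirusTotal and return the analysis report."""
    headers = {"x-apikey": api_key}

    # VirusTotal unpacks the tarball and scans every file it contains.
    with open(tarball_path, "rb") as fh:
        upload = requests.post(f"{VT_API}/files", headers=headers, files={"file": fh})
    upload.raise_for_status()
    analysis_id = upload.json()["data"]["id"]

    # A real integration would poll until the analysis status is "completed".
    report = requests.get(f"{VT_API}/analyses/{analysis_id}", headers=headers)
    report.raise_for_status()
    return report.json()
```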

Rather than trying to parse the entire results object and make sense of everything, we used the summary: if any of the scanners discovered malware, we flagged the module; if no scanner detected malware, we marked it as passing the scan. We also link back to the VirusTotal results page so that users can see the details for themselves, if they'd like.
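
The pass/flag decision then only needs the aggregated stats from that report. A minimal sketch, under the same assumptions as above:

```python
def passes_malware_scan(report):
    """True when no scanner reported malware; False means the module gets flagged."""
    stats = report["data"]["attributes"]["stats"]
    return stats.get("malicious", 0) == 0
```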

With the help of our Design team and some backend plumbing, we had a malware scanning solution to help you select modules that best fit your needs and security policies. Because we have thousands of module releases a year, we're starting small by scanning all of our Supported modules, then expanding to Partner Supported, and then Approved. But by the end of the year, we expect that every module release will be security scanned as it's published. Keep in mind that the scan happens at publish time and we will not retroactively scan existing module releases, so be sure to look for the malware scan status when you're evaluating modules. See an example on our puppetlabs-postgresql module.

So what next?

The process of building the scanner identified a couple of interesting UX questions. At the top of that list was the realization that this was the first use of the quality scoring system that could actually block publication or put a module into an ambiguous state. Solving that problem means that we're now in a position to resolve some other outstanding feature requests. For example, users have been asking for the ability to "preview" module releases to check for rendering issues or quality scores prior to actually publishing a release, and this is now a feature on our roadmap.

Revamping the quality scores themselves has also been on our roadmap for a bit. The security-focused lint checks we mentioned above will certainly play into that. It's still an open question whether we roll all the lint checks in together or separate them out by topic. But the container-based workflow we talked about above means that whichever grouping we choose will be relatively easy to build.

In any case, we're excited to provide you with another tool in your security arsenal and hope that it builds confidence in the content you use from the Puppet Forge. Watch this space for more exciting things coming down the line.

  • Ben is the Forge and Ecosystem product manager at Puppet.
  • Nik is a Software Engineer on the Forge team at Puppet.
