Leon Adato for New Relic


Rational Shift-Left Security for Developers

(This article first appeared on the New Relic Blog)

The Day Started So Well, and Then…

So let's say you're a dev. And you're sitting at your devvy desk doing devvy things, when the security team stops by and drops a fresh, steaming pile of… "discoveries"... on you. You now have what amounts to a lose-lose decision to make:

You can stop what you're doing and dig into those findings right now, blowing up the work you'd already planned (and promised) for this sprint.

Or you can ignore the security stuff you've just been handed, and risk a possible security breach - which would not only upset the business but maybe also cost you your job.

The solution, or so the industry thinks, is "shift left" - pushing not only the work, but also the responsibility for identifying that work, back toward the developer. The alternative is what we have today: wait for the development work to be done, pass it to the next team, let that team find the problems, and THEN shove the work back at the developer as an interruption.

The problem is that, for many developers, the experience is fundamentally the same. Lacking built-in tools (or experience) to identify security threats as they’re being coded, developers still wait on some external factor to tell them when there’s a problem and what needs to be fixed.

To be absolutely clear: I'm not saying security issues aren't important. I'm not blaming the security team for doing their job. I'm not even blaming the fundamental premise of the "shift left" concept. At the same time, "shift left" only works if the person on the left (that would be you) is able to bear the load in a way that doesn't create impossible workloads or unachievable results. At the end of the day, this issue isn't going to be fixed by spouting platitudes about how "security is everyone's responsibility".

But make no mistake, it can be fixed. Like so many technical challenges, it can be fixed by a combination of two things: re-contextualizing our work so that the interruption becomes part of our normal workflow, and adopting tools that enable that new workflow.

Show first, tell later

This is the happy path, the journey that would reduce, if not remove, the burden of "shift left" responsibilities.

As I said at the start, I'm in my IDE, doing the work that I love. Suddenly I look over and notice an issue has been flagged on the sidebar.

[Screenshot: an issue flagged in the IDE sidebar]

But it's OK because this is part of my normal workflow. All kinds of relevant tasks show up there - from debug issues to pending commits to pull requests. And even (maybe especially) issues where my code is bug-free, but is experiencing a problem with performance or even security.

Which is the one I want to focus on at the moment - the critical item under "Vulnerabilities".

The View From New Relic

To really understand how this vulnerability is impacting my work, I'm going to start in the New Relic dashboard. Clicking into the summary for this application, I can see that it's been performing fairly well. But once again, that vulnerability is shouting at me - bright and red - from the top of the screen.

[Screenshot: the application summary in New Relic, with the vulnerability banner at the top]

So I click into that, for more information:

[Screenshot: the vulnerability summary page]

The vulnerability summary page is able to show me, at a glance, how many vulnerabilities I have, how critical they are, how long the problem has existed, and more.

In this example, I have just one issue, so I click into it to see more:

[Screenshot: details for the single critical vulnerability]

From here I'm able to see the publicly available information (with links) on this issue. I also have a chance to see which entities are affected, by clicking the "Fix affected entities" button.

[Screenshot: the affected entities view]

The key takeaway here is less about the known issues, and more about the surprises. We've all experienced the shock of finding out that our CI/CD pipeline failed to deploy to a specific cluster, container, pod, or region. And when that happens, the fix we so carefully crafted is fruitless.

Because New Relic Vulnerability Management is looking at what's running in memory, not just what appears in a repo, it's able to report what's actually true in production - including whether the fix you deployed really made it out there.

Staying in our happy place: the IDE

As Dorothy said, "If I ever go looking for details about a vulnerability, I won't look any further than my own IDE; because if it isn't there, I haven't had enough coffee to deal with it."

(Dorothy is a dev I work with. Did you think I meant someone else? But I digress.)

The truth is, I never had to take you on the journey through the New Relic VM dashboards. It was the best way to show the details of the vulnerability, but it took us out of the place where we do our work, which is the exact opposite of the happy path I described at the beginning of this blog.

Now that you understand the logic and the magic of VM, I can tell you that it can all be found in that sidebar element that told us about the vulnerability in the first place:

[Screenshot: the vulnerability details in the CodeStream sidebar]

Simply clicking that item will show all the detailed information of what's wrong and which version I need to upgrade to in order to fix it.

I can't stress that last part enough. So many security tools will flag an issue and then simply list the latest version, without regard to the fact that our code is often many revs behind, often for very good reasons. "Upgrade to the current version" can mean hours of refactoring, regression testing, QA, and more. By listing the nearest viable upgrade - which is often just a couple of versions away from the one currently installed, and therefore less likely to introduce massive changes, deprecations, etc. - developers can make an informed assessment of the level of effort it will take to address the issue.
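To make that concrete, here's a rough sketch of what the fix usually amounts to in a Python project. The package name and version numbers below are made up for illustration; they aren't taken from the example app:

# requirements.txt (hypothetical package and versions)
# somelib==2.1.0    <- the release flagged as vulnerable
somelib==2.1.4      # nearest patched release, rather than jumping straight to 3.x

After that it's the usual pip install -r requirements.txt, a redeploy, and a quick check that the flag has cleared.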

And so - with just a couple of clicks - we've identified the vulnerability, where in the codebase it's located, and how to fix it.

In this (admittedly simple) example, upgrading the library was all that was needed. Within about 5 minutes, everything is showing clean, both in New Relic:

[Screenshot: the New Relic vulnerability view showing no open issues]

And also in CodeStream:

[Screenshot: the CodeStream sidebar showing no open vulnerabilities]

And you may ask yourself, "Well, how did I get here?"

With the entire journey completed, you may be curious how you can set this up yourself. That's what this section is all about.

First, you can use your own sample application, or use the one I've built.

Once you have your application working, the next step is to install New Relic APM. You can find instructions on how to do that, for your language, here. But a quicker way is to go into your New Relic portal, click "Add Data", and select your language from the list you see there.

[Screenshot: the Add Data screen in New Relic]

Because my example is written in Python, the installation is super simple. After giving the app a name

[Screenshot: naming the application during guided install]

I have just two steps: First, in a terminal window I need to run:

pip install newrelic

Second, I need to download the "newrelic.ini" file and copy it into the directory where my application lives.

[Screenshot: downloading the newrelic.ini file]
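For reference, the downloaded file is a plain ini file. A minimal version looks roughly like this - the license key is a placeholder, and the generated file includes many more (optional) settings than I'm showing here:

[newrelic]
license_key = YOUR_LICENSE_KEY_HERE
app_name = my-sample-app
monitor_mode = true
log_level = info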

You can either start your application in a special way so that it initializes New Relic and sends data; or you can modify your application code.

The command-line method is:

NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program (your program)

So if my application is main_run.py the command line would be:

NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program main_run.py
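The wrapper isn't limited to plain scripts, either. If the app were a web app served by, say, gunicorn (the module and application names here are placeholders), the same pattern would look like:

NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program gunicorn myapp:app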

Alternatively, you can add a few lines inside your code that create the hooks to New Relic. While the specific technique will vary from language to language, a very small example for Python would look like this:

import newrelic.agent

# Read the config file and start the agent
newrelic.agent.initialize('newrelic.ini')

@newrelic.agent.background_task(name="task_name", group='group_name')
def execute_task():
    # your actual code goes here
    pass

execute_task()

# Flush any remaining data before the process exits
newrelic.agent.shutdown_agent(timeout=10)

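One small note: because this version of the code calls newrelic.agent.initialize('newrelic.ini') itself, the newrelic-admin wrapper isn't needed. Assuming the ini file sits next to the script, starting the app the normal way is enough:

python main_run.py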




Dude, where's my VM?

You may be asking how to install Vulnerability Management itself, and the answer is "you don't." VM doesn't pull new data; it creates new insights by leveraging the information we're already collecting.

Which means... you should be all set! If you open up your New Relic portal you should see data flowing in for that application. You'll find it under the "All Entities" menu, as well as under "APM & Services".

And if your code uses a library with a vulnerability in it, it will be picked up and flagged right there within the console.
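If you'd like to see that flagging in action with a sample app, one low-effort approach is to deliberately pin an outdated release of a library that you know has a published CVE, redeploy, and watch. The package name and version below are placeholders - substitute a real library and a release with a known advisory:

# requirements.txt - intentionally pin a release with a published CVE
oldlib==1.2.0

Shortly after the app restarts with that dependency loaded, the vulnerability should surface in the console the same way my example did above.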

The CodeStream bonus round

The process of setting up CodeStream probably deserves its own blog post, just to go over the capabilities and benefits. So for the sake of simplicity I'm just going to point you to the excellent documentation, which you'll find here.

But once you've added it into your IDE (Visual Studio Code, Visual Studio, or JetBrains), you simply have to connect it to your New Relic account and your GitHub account to begin viewing everything from performance problems to pull requests to security issues right within your IDE. It's truly a game-changer.

What Have We Learned?

In order for security work to successfully "shift left" to developers, the right tools need to be in place - tools that let security issues present themselves the same way as other code quality issues like bugs and defects, and that let them be resolved as part of a development team's normal work processes. More importantly, security tools work best when they're integrated into the monitoring and observability systems already present in applications, reducing the risk of over-instrumentation. That proximity to the running code also lets a solution like Vulnerability Management identify issues in more than just the static code repository, and alert developers when fixed code hasn't actually made it out to the production systems.
