
James McMahon for Focused

Originally published at focusedlabs.io

The No-Nonsense Guide to JVM 14 Memory on Kubernetes

This article was born out of the frustration I had getting a direct answer to the question of how the modern JVM handles memory in containers. Let's get straight down to brass tacks.

As of JDK 14, the JVM has UseContainerSupport turned on by default. Here are some interesting defaults that come with it.

You can see these for yourself by running:

docker run -m 1GB openjdk:14 java \
            -XX:+PrintFlagsFinal -version \
            | grep -E "UseContainerSupport|InitialRAMPercentage|MaxRAMPercentage|MinRAMPercentage|MinHeapFreeRatio|MaxHeapFreeRatio"

Outputs:

    double InitialRAMPercentage                      = 1.562500                                  {product} {default}
      uintx MaxHeapFreeRatio                         = 70                                     {manageable} {default}
     double MaxRAMPercentage                         = 25.000000                                 {product} {default}
      uintx MinHeapFreeRatio                         = 40                                     {manageable} {default}
     double MinRAMPercentage                         = 50.000000                                 {product} {default}
       bool UseContainerSupport                      = true                                      {product} {default}
  openjdk version "14.0.2" 2020-07-14
  OpenJDK Runtime Environment (build 14.0.2+12-46)
  OpenJDK 64-Bit Server VM (build 14.0.2+12-46, mixed mode, sharing)

MaxRAMPercentage is key here. By default the JVM will use at most 25% of the container's memory for the heap.
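For example, with the 1 GB limit used above, 25% works out to a max heap of roughly 256 MB. You can confirm the computed value directly:

# 1 GB container limit * 25% MaxRAMPercentage ~ 256 MB max heap
docker run -m 1GB openjdk:14 java -XX:+PrintFlagsFinal -version | grep -w MaxHeapSize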

MinRAMPercentage and InitialRAMPercentage are tricky; this Stack Overflow answer is the best explanation I've read so far.

Here is a quick summary of that post:

InitialRAMPercentage - Used if InitialHeapSize and -Xms are not set ("not set" here meaning InitialHeapSize is 0). Source reference. I've never had much luck getting this one to work as expected.

MinRAMPercentage - Used if MaxHeapSize and -Xmx are not set ("not set" meaning MaxHeapSize is left at its default and -Xmx is absent). Despite the name, it effectively controls the maximum heap size on containers with a small amount of memory. Source reference. See the sketch below for a way to observe both flags.
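To see how these percentages translate into actual heap sizes, you can print the computed values with -Xms and -Xmx left unset. The 200 MB limit and the 50% figure below are just illustrative numbers I picked, not recommendations:

# With -Xms/-Xmx unset, the JVM derives InitialHeapSize and MaxHeapSize from these percentages.
# On a limit this small, MaxHeapSize tends to follow MinRAMPercentage (50%) rather than MaxRAMPercentage (25%).
docker run -m 200MB openjdk:14 java \
            -XX:InitialRAMPercentage=50 \
            -XX:+PrintFlagsFinal -version \
            | grep -wE "InitialHeapSize|MaxHeapSize"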

MaxRAMPercentage and Kubernetes

The observations below are based on some tests using a simple memory testing application I wrote.

You can set MaxRAMPercentage (and other JVM arguments) by editing the ENTRYPOINT in your Dockerfile:

ENTRYPOINT ["java","-XX:InitialRAMPercentage=10","-XX:MaxRAMPercentage=75","-jar","/app.jar"]
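If you would rather not bake the flags into the image, another option (assuming your image ends up running a plain java launcher) is to pass them through the JAVA_TOOL_OPTIONS environment variable in your pod spec; the JVM picks that variable up automatically at startup:

  env:
    - name: JAVA_TOOL_OPTIONS   # picked up by the JVM, no image rebuild needed
      value: "-XX:InitialRAMPercentage=10 -XX:MaxRAMPercentage=75"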

Limits and requests are set in the pod spec using:

  resources:
    limits:
      memory: 2Gi
    requests:
      memory: 2Gi
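As a sanity check, you can spin up a throwaway java process inside the running container and see what heap it would compute (a sketch; <pod-name> is a placeholder). With a 2Gi limit and MaxRAMPercentage=75, it should report a max heap of roughly 1.5Gi:

kubectl exec <pod-name> -- java -XX:MaxRAMPercentage=75 -XX:+PrintFlagsFinal -version | grep -w MaxHeapSize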

If no limits are set on the pod

The JVM will use up to MaxRAMPercentage of the Node's memory. If the pod approaches the Node's memory limit, Kubernetes will kill the pod rather than the JVM throwing an OutOfMemoryError.

Limits set on the container

For MaxRAMPercentage less than 100%

The JVM will use up to MaxRAMPercentage of the limit. The pod is not killed; instead, the JVM throws an OutOfMemoryError when it runs out of heap.

For MaxRAMPercentage of 100%

The JVM will use up to 100% of the memory limit. I occasionally saw this throw an OutOfMemoryError, but with continued memory pressure Kubernetes will kill the pod.
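A kill by Kubernetes shows up in the container status rather than in the application logs; the termination reason is reported as OOMKilled. Here's one way to check (a sketch; <pod-name> is a placeholder):

# Prints "OOMKilled" if the container was killed for exceeding its memory limit
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'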

What to set as max?

Assuming you are running a single container per pod, 25% as a default seems relatively low for MaxRAMPercentage. So what should you set it to for your application?

This question doesn't have a cut-and-dried answer. I can say 100% is a bad idea, but what's optimal is going to depend on the memory footprint of your application (remember the JVM also uses memory outside the heap, for Metaspace among other things) and the container you are using.

I've heard advice that says you should always leave at least 1 GB for the OS. I've also heard 75% recommended. Personally, we are using 80% for our application, but I am keeping an eye on it and adjusting as needed. The truth is you are going to need to test a few different configurations for your app and see what works for you.

What about requests?

Requests are an interesting one. I haven't seen them affect the starting heap size of the JVM, so they are just used for Node scheduling like any other Kubernetes pod. The JVM tends to increase heap size and then never give it back, or give it back very slowly, depending on your settings. (Heap reclamation is a large enough subject for another post entirely.)

Personally I've been setting requests to 50% of my limit. Depending on how fast your application hits that max heap, I could also see setting this to 100% of the limit to help Kubernetes put your application on the appropriate node.
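Putting it all together, here is a sketch of the kind of configuration described above; the specific numbers are illustrative, not a recommendation. A 2Gi limit with an 80% MaxRAMPercentage gives the heap roughly 1.6Gi and leaves the rest for Metaspace, thread stacks, and the OS, with the request set to 50% of the limit:

  resources:
    limits:
      memory: 2Gi
    requests:
      memory: 1Gi   # 50% of the limit
  env:
    - name: JAVA_TOOL_OPTIONS
      value: "-XX:MaxRAMPercentage=80"   # ~1.6Gi max heap out of the 2Gi limit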


That is the short version of a lot of research, and as close to a TL;DR as I could get for a complex topic. With JDK 15 right around the corner, I will be looking to revise this if there are any notable changes.

Any tips I missed? Let me know below.

Additional Resources / References

Note that some of these pertain to JDK versions prior to 14.
