I have a Spring Boot application running an embedded Tomcat server, pretty basic…

It’s running inside a Docker container. If I was to $ docker run the image (without memory restrictions), this is what it looks like:

docker ps

~677mb container for a fairly simple Spring Boot app?! Surely I’m not the only one thinking there is something up here? So I started digging…

First I need to actually see what processes are running in the container.

docker exec my-app top -m

Aha! There’s a Gradle process running which is reporting an RSS (Resident set size) just as large as our Spring Boot application. Taking a look at the Dockerfile it’s easy to see why:

The launch CMD uses a Gradle task to run the application. That process doesn’t just boot up the application and then exit; it hangs about, alongside the app process.
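The Dockerfile itself isn’t reproduced here, but the offending instruction presumably looks something like this (a sketch – the bootRun task name and image are assumptions on my part):

```dockerfile
FROM gradle:jdk
COPY . /app
WORKDIR /app
# Booting via Gradle: the Gradle launcher stays resident for the
# lifetime of the container, alongside the forked Spring Boot JVM.
CMD ["gradle", "bootRun"]
```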


So the first memory optimisation is a simple one – get rid of that Gradle process. I just need to change from booting the app using a Gradle task, to directly executing a .jar file with Java.

CMD ["java", "-jar", "build/libs/{app name}.jar"]

Now let’s run top in the container again…

No more Gradle process! So that’s shaved off 50% right there.


Even with my very limited knowledge of Spring and the Java runtime, I still think we can do better. ~382mb for a simple API, taking no traffic… I’m missing something surely.

Seems I am: you can give the runtime a heap size limit using -Xmx56m. I imagine every Java dev knows this, but it’s new to me. Adding this argument to our $ java -jar command will limit the Java heap size to ~56mb. The runtime will try to keep the heap below that number by running garbage collections, amongst other things.

Careful with this: if the heap is limited too much it could cause thrashing, which funnily enough isn’t in the top 10 best ways to increase performance!

To set the heap size, I can assign the -Xmx flag to the JAVA_OPTS environment variable.

docker run -e "JAVA_OPTS=-Xmx56m" app-image

Seems legit, right? Nope, this doesn’t work. Java ignores our variable and boots up using the defaults.

The reason has less to do with Spring Boot than I first thought. The java binary never reads JAVA_OPTS at all – it’s just a convention used by wrapper scripts – and our exec-form CMD never references the variable, so it’s silently ignored. Spring Boot will take any environment variable we pass it and make it available to the application, but our JAVA_OPTS is meant for the Java runtime itself. So we need to ‘exec java’ with the $JAVA_OPTS variable, via a shell that can expand it. This requires a small change to the Dockerfile.

ENTRYPOINT exec java $JAVA_OPTS -jar build/libs/{app name}.jar
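For contrast, keeping the exec form wouldn’t have helped, because no shell is involved to expand the variable (jar name illustrative):

```dockerfile
# Exec form: Docker runs the command directly, with no shell, so the
# literal string "$JAVA_OPTS" would be handed to java as an argument
# and the flag would never take effect.
# CMD ["java", "$JAVA_OPTS", "-jar", "build/libs/app.jar"]

# Shell form: Docker wraps this in `/bin/sh -c`, which expands
# $JAVA_OPTS before java starts.
ENTRYPOINT exec java $JAVA_OPTS -jar build/libs/app.jar
```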

I am using ‘exec’ in the command so that the java process replaces the shell that launched it, rather than running as its child. Keeps the container tidy: the JVM becomes PID 1 and receives Docker’s stop signals directly.
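A quick way to see what ‘exec’ does: the shell’s PID and the PID of the exec’d process are the same, because exec replaces the process image instead of forking a child.

```shell
# The first line prints the PID of the outer shell; the second prints
# the PID of the process it exec'd into. They are identical.
sh -c 'echo $$; exec sh -c "echo \$\$"'
```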

Now when we spin up a container, $JAVA_OPTS is passed to the runtime, as we wanted. Any other environment variable we set will of course still be picked up by the Spring Boot application.

Ultimately this gives me the ability to tune Java as I see fit. Here are the results after applying the -Xmx56m argument.

final ps results

Lessons learnt

Build tools

Don’t run the application using Gradle. It’s great during development, but the Gradle process hangs around and reserves memory unnecessarily, bumping up the cost of the container as a whole.

As Docker squeezes the container for memory, Gradle will give it up, but there is a good chance your container is going to eat up most of your hard limit unless it is set high – or worse, if there is no limit set at all. So if you’re running this application on a cluster alongside others, it’s not ideal.
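If you do set a hard limit, it’s worth keeping the heap comfortably below it so the JVM’s non-heap overhead (metaspace, threads, native buffers) still fits. Something along these lines – the image name and numbers are purely illustrative:

```shell
# Cap the container at 256mb and size the heap well under that,
# leaving headroom for the JVM's non-heap memory.
docker run -m 256m -e "JAVA_OPTS=-Xmx128m" app-image
```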

I guess this rule applies to most platform build systems.

JAVA_OPTS

I don’t quite understand why I need to do this. Surely Spring Boot is optimised for efficiency and performance out of the box? Why do I have to add this risk by putting a memory limit on the heap? Or a better question: why do the defaults want to hog so much memory? My lack of experience on the JVM prevents me from answering this, it just seems odd – but there we have it!