Are you new to containers? Do you want to understand how you should best use the features of containers, or how you should architect your Java applications to make the most of containers?
Here are a few best practices which I try to stick to when developing Java applications that will run in Docker containers.
These tips are worth thinking about when you’re looking to put your Java application in a Docker container.
The joy of Java in containers
But first, know that you’re lucky.
If you’re a Java developer then you’re already half-way to containers, because you write code for a virtual machine, and your build often produces a single self-contained artifact (a so-called “fat JAR”).
This is great, as you can already run a JAR file wherever Java is installed.
But Docker provides the icing on the cake, as it makes your application entirely portable, including the Java JVM. That means no need to worry about which version of the JRE is installed on the host, because the JRE is installed right inside the Docker image itself.
Making the right use of Docker
The reason why containers are great is not simply the technical features of Docker. It’s because containers make it easier to adopt some good practices.
To get the best value out of Docker, I recommend combining containers with good practices like automation, continuous delivery and externalised configuration.
If you “just” package your application in Docker, and do nothing else, you will be left wondering what the benefits are.
The benefits of containers are realised when you do all of these things together.
Having said that, there are some areas to focus on when you’ve already decided you want to use containers, and you want some tips.
Best practices for Java in Docker
So here are my best practices for containerising your Java applications:
#1 - Separate configuration from code
I’ve put this at the top because I think it’s the most important thing to grasp when you’re Dockerising any application.
If your application depends on configuration – such as database usernames and passwords, or hostnames for talking to external APIs – then it should be provided at runtime.
What does this mean? It means that your application should expect its configuration in its environment. That configuration should be provided to the container, or injected into it at runtime, rather than it being “baked” or hardcoded into the Docker image itself at build time.
This separation forces you to be clear and explicit about what is configuration versus code.
When configuration is separate, you can run the same Docker image in multiple environments simply by changing the configuration. If the configuration is baked into the container image, then you will need to build a new image every time you want to deploy to another environment.
Here are some of the most common ways that you provide configuration to a Java app in a container:
| Way to provide configuration | How to do it in Java |
| --- | --- |
| Mount a volume containing configuration files | e.g. Read a properties file from a well-known path using `java.util.Properties` |
| Set environment variables in the container | e.g. Use `System.getenv()` |
| Use a network-based configuration service | e.g. Spring Cloud Config |
| Give arguments to the JVM on startup by overriding the container’s entrypoint | e.g. Pass `-D` system properties and read them with `System.getProperty()` |
Defining your configuration externally will make it so much easier to run the same Docker image in any environment, with just a simple change of configuration.
Why: One of the benefits of containers is that they are portable. If you are building different Docker images for each target environment (dev, test, production) then you lose this portability, because each container image is tied to a specific environment.
When you build one image and provide any configuration as part of the environment of the container, you keep the image portable and you also ensure that you are running the same code in all environments. This makes it much easier to debug issues or recreate environments in future.
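As a minimal sketch of the environment-variable approach (the variable name `DB_HOST` and its default are illustrative, not prescriptive), a Java app might read its configuration like this:

```java
// Minimal sketch: read configuration from the environment at runtime,
// falling back to a default. DB_HOST is an illustrative variable name.
public class AppConfig {

    static String get(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // Provided at runtime, e.g.: docker run -e DB_HOST=db.internal myapp
        String dbHost = get("DB_HOST", "localhost");
        System.out.println("Connecting to database at " + dbHost);
    }
}
```

The same image then behaves differently per environment purely through `docker run -e` flags (or a Kubernetes ConfigMap), with no rebuild.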
#2 - Automate all manual steps
This is probably the second most important best practice.
If your application requires some manual configuration steps when it starts up, then it’s best to automate those steps. You can either do this as part of your Docker image build (in the Dockerfile), or in a script to be run on startup.
If you don’t automate this, you will need to perform these tasks manually each time you start a new container.
Two common ways you can automate configuration steps are:

- Add the relevant files or binaries into the Docker image at build time, using instructions in the Dockerfile (such as `COPY` or `RUN`).
- Run a script on container startup which creates or edits files as needed, and then launches your application.
Why? The reason manual configuration is a problem is because when you start running containers at scale, you will need to perform manual configuration every time the container starts.
This means it turns containers from “cattle” (disposable and designed for failure) into “pets” (a unique system that you need to manage manually).
If you treat your containers as pets, you will be reluctant to destroy or restart them, because of the manual effort required to maintain them.
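As a sketch of the startup-script approach (the paths and variable names below are illustrative, and assume the app reads a properties file at startup), the script renders the config file from environment variables before launching the JVM:

```shell
#!/bin/sh
# Sketch of a startup script: render a config file from environment
# variables, then hand over to the application. Paths are illustrative.
CONFIG_DIR="${CONFIG_DIR:-/tmp/app-config}"
mkdir -p "$CONFIG_DIR"

cat > "$CONFIG_DIR/application.properties" <<EOF
db.host=${DB_HOST:-localhost}
db.port=${DB_PORT:-5432}
EOF

echo "Wrote $CONFIG_DIR/application.properties"

# A real entrypoint would now replace itself with the JVM, so the Java
# process receives signals (e.g. SIGTERM) directly:
# exec java -jar /app/app.jar
```

Set this script as the container’s `ENTRYPOINT` and every new container configures itself identically, with no manual steps.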
#3 - Use a well-known, maintained base image
When you put your Java application into a Docker container, you’re probably going to be building on an existing Docker image. That means you’re going to reference someone else’s Docker image in the `FROM` statement in your Dockerfile.
When you’re extending or building on top of another image, choosing a well-known and well-maintained base image is absolutely essential.
I’m not going to risk using an image if I don’t trust its creator or maintainer. To me, that’s just asking for trouble. You need to be assured that the image has been patched for security issues, and that it isn’t going to contain any malware when it’s deployed on your production servers. This is the price we pay for the convenience of pulling images from public registries.
So which image should you pick?
If you already pay for support from a software vendor like Oracle, Red Hat, Azul or others, then that’s the gold standard! If you don’t, then use certified images on Docker Hub.
You can also use the distroless images from Google’s `distroless` project, which are based on Debian and have the absolute minimum of tools installed, reducing the overall attack surface.
If you don’t want to use one of the community base images, consider using a minimal Linux base image like Alpine, and then add in the JDK yourself.
With any of these options, it’s critical to make a plan for how you will update your image when new versions of Java are released. This keeps you up to date with security updates.
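As a sketch (the image tag and paths are illustrative), a Dockerfile building on a well-known, maintained base image might look like this; pinning a specific tag makes Java updates deliberate rather than accidental:

```dockerfile
# Illustrative: a well-known, maintained base image with a pinned tag.
FROM eclipse-temurin:17-jre-jammy

COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

When the base image publishes a new patch release, you bump the tag, rebuild and redeploy, which is how the security-update plan mentioned above gets executed in practice.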
#4 - Use a Java framework designed for containers
If I’m starting a new app, and I have the opportunity to pick a framework, I will pick a modern framework which is easy to run in containers, or is designed for microservices, such as Quarkus, Micronaut, Helidon or Spring Boot.
Why? Because these frameworks include things like an embedded web server out of the box, support for reading configuration from the environment, and many more features which make it easier to run your Java apps in a container.
#5 - Aim for a smaller image size
One thing I try to aim for in Docker images is a small file size. It’s certainly not essential, but it will make your images quicker to pull and run, and reduces the surface area of your containers that may be vulnerable to attack.
You can reduce the image size in a few ways:
Exclude your build dependencies from the final image: When I build a Java application with Maven, it tends to download the entire internet! If you then release that image to the world as the Docker image for your application, it will contain a bunch of unnecessary dependencies and the Maven binary.
To exclude Maven and all those dependencies from your final image, you can use a Docker multi-stage build, which will build your JAR using one container image, and then inject the JAR into another base image, so you keep your final image slim and small.
Build your own slim Java runtime: In newer versions of Java (9 and later), you can use modules to reduce the overall size of the JVM, by leaving out the modules you don’t use. The `jlink` command assembles just the modules you need into a custom Java runtime.
Use a `.dockerignore` file: This file tells Docker not to copy unwanted files into the build context. Make use of it to keep the files that end up in the final image to a minimum.
Use a smaller base image: Finally, using a smaller base image will of course also help keep the final image size down. Alpine and Google’s distroless images are both small.
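The multi-stage approach described above can be sketched like this (image tags and paths are illustrative): the first stage uses Maven to build the JAR, and only the JAR is copied into the slim runtime stage.

```dockerfile
# Stage 1: build the JAR with Maven. This stage, with Maven and all the
# downloaded dependencies, is discarded from the final image.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /build
COPY pom.xml .
RUN mvn -q dependency:go-offline
COPY src ./src
RUN mvn -q package -DskipTests

# Stage 2: a slim runtime image containing only the JRE and the JAR.
FROM eclipse-temurin:17-jre
COPY --from=build /build/target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Copying `pom.xml` before the source code also means Docker can cache the dependency-download layer across builds that only change application code.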
#6 - Use Java 1.8.0_191 or later, preferably 11+
Java previously had some quirks while running in containers.
Java used to have difficulty understanding how much memory it was able to consume. It often ended up consuming too much memory, which would cause the container to be shut down by Docker. Or, it would be unaware of the number of CPUs it was working with.
Thankfully, these issues are resolved in more recent builds of Java, but you’ll need to make sure you’re using Java SE Development Kit 8, Update 191 (JDK 8u191), which was released in October 2018. You can also use any later version, such as Java 11, which is another long-term supported release.
Preferably of course, go for at least Java 11, which has significant memory and CPU improvements.
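A quick way to check what your JVM believes about its resources is to ask the `Runtime`; on a container-aware JVM (8u191 or later), these figures reflect the container’s limits rather than the host’s:

```java
// Prints what the JVM thinks its CPU and memory limits are.
// On Java 8u191+ these reflect the container's cgroup limits, not the host's.
public class ContainerLimits {

    static int cpus() {
        return Runtime.getRuntime().availableProcessors();
    }

    static long maxHeapMiB() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Available processors: " + cpus());
        System.out.println("Max heap (MiB): " + maxHeapMiB());
    }
}
```

Try running this inside a container started with, say, `docker run --cpus=2 -m 512m ...` and compare the output with the limits you set.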
#7 - Use a specialised build tool for Java
The universal approach of using a Dockerfile is fine, but I think there are much better ways of building Docker containers as part of your Java build.
If you use a specialised Docker image build tool for Java, you will benefit from integration with your build tool (e.g. Maven or Gradle) and some opinionated defaults for your Java apps.
Here are two of the best options for building containers for Java applications right now:
Jib is a tool from Google for building better Docker and OCI images for Java applications. It doesn’t require the Docker daemon, and it helps you create quite granular builds, because it can separate your Java application into multiple layers. This means builds can be faster, because only the layers that have changed need to be rebuilt and pushed.
Eclipse JKube hooks into your Maven build process, and can build container images using Docker, Jib or S2I (in OpenShift). It can help you deploy your container to Kubernetes, too.
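As a sketch of what this looks like with Jib (the plugin version and image name below are illustrative; check the current release and use your own registry), you add the plugin to your `pom.xml` and run `mvn jib:build`:

```xml
<!-- Illustrative Jib configuration in pom.xml. Check the current
     plugin version, and use your own registry/image name. -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <to>
      <image>registry.example.com/myteam/myapp</image>
    </to>
  </configuration>
</plugin>
```

With this in place, the image build becomes part of your normal Maven lifecycle, with no Dockerfile and no Docker daemon required.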
#8 - Build one image for all environments
One of the things I love about Docker containers is that they are designed to run the same anywhere. This means that I can build an image once, and run it in every environment – development, test, production – just by changing its configuration.
In your CI/CD pipeline, build just one Docker image. Then, deploy this image into each target environment, using your externalised configuration (environment variables, etc.) to control the behaviour of your app in each environment.
Building one image per environment is a bit of a container anti-pattern and should really be avoided.
#9 - Tune your JVM parameters for the container
As always, pay attention to your JVM parameters. These allow you to control everything from maximum heap size to garbage collection.
You might need to reduce your application’s memory usage. Unlike virtual machines, where we might have an entire host dedicated to our application (and lots of memory overhead), containers are often packed together onto a host to make the best use of available resources. So we might need to reduce our memory consumption. Take a look at your `-Xmx` setting for maximum heap size.
You can also use something like Java Buildpack Memory Calculator to create the right JVM parameters to pass to the Java Virtual Machine on startup.
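As a sketch (the values are illustrative, not recommendations), the two most common ways to bound the heap in a container are a percentage of the container’s memory limit, or an explicit ceiling:

```shell
# Size the heap as a fraction of the container's memory limit
# (container-aware JVMs detect the limit automatically):
java -XX:MaxRAMPercentage=75.0 -jar /app/app.jar

# Or set an explicit maximum heap size:
java -Xmx512m -jar /app/app.jar
```

`MaxRAMPercentage` is usually the better fit for containers, because the same image then adapts when you change the container’s memory limit.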
#10 - Single, immutable concern per container
If you’re used to running a Java application server like JBoss or WebLogic, loaded with a ton of applications, then you might want to think again.
Containerisation is about creating a single unit of deployment, or a single concern.
Additionally, you shouldn’t be modifying a container once it’s started up, either to deploy new applications, undeploy existing ones, or modify/upgrade them. Instead, build a new container image with your changes, shut down the existing container, and start a new container with the new image.
#11 - Aim for fast start-up times
Making sure your application starts quickly has always been a goal. But it’s even more important in containers, which are often run at scale, and moved around between servers like cattle.
So, some tips for fast start-up times:
Compile to a native binary using GraalVM: This gives you very fast startup times, and will be especially interesting to you if you’re considering running Java in a serverless context.
Be aware of any tuning you can do on your framework: Spring Boot, for example, can take some time to start, if it’s a large application.
Use Java’s “Class data sharing” (CDS) feature: This reduces the startup time for your Java applications, by reducing the JVM’s class-loading workload.
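The CDS tip can be sketched using application class-data sharing (a rough outline; it assumes a recent JDK, and the paths are illustrative):

```shell
# 1. Trial run: record the classes the app loads into an archive on exit
#    (-XX:ArchiveClassesAtExit requires JDK 13 or later):
java -XX:ArchiveClassesAtExit=/app/app.jsa -jar /app/app.jar

# 2. Subsequent starts: map in the archive to skip repeated class
#    loading and verification work:
java -XX:SharedArchiveFile=/app/app.jsa -jar /app/app.jar
```

You can bake the archive into the image during the build, so every container starts with it already present.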
#12 - If your application is big, use layers efficiently
Docker images are composed of multiple layers, which are then merged together when the container starts.
If you’re using Spring Boot, and your application is rather large, you can make your Docker image even more efficient by using Cloud Native Buildpacks. This allows you to split your Java application’s Docker image into layers.
So if you have dependencies that are fairly static, you can define these in their own layer. And then you can have a separate layer for your application code, because you’re likely to change this more frequently (e.g. bug fixes, everyday enhancements).
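Buildpacks handle this layering for you automatically. If you prefer a plain Dockerfile, Spring Boot’s `layertools` jar mode achieves the same layering; here is a sketch (it assumes Spring Boot 2.3+ with layering enabled, and the image tags and paths are illustrative):

```dockerfile
# Extract the layered Spring Boot JAR into its constituent layers.
FROM eclipse-temurin:17-jre AS builder
WORKDIR /app
COPY target/app.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract

# Copy the stable layers first so Docker can cache them; the frequently
# changing application code goes last.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
```

With this structure, a bug fix to your own code only invalidates the final, small `application` layer, so rebuilds and pulls stay fast.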
If you are using a fat-JAR style build, you might get performance improvements by switching to a thin-JAR style. But be aware that this can make it harder to keep track of your dependencies; the benefit of a fat JAR is that the file includes everything the application needs to run.
#13 - Running Java on Alpine: musl vs glibc
Be aware that if you use Alpine Linux as your container base image, you’re using `musl` and not `glibc` (the GNU C Library). This might have some impact on your application’s performance and, more importantly, its supportability.
Google’s distroless image, and most other supported Java images, use `glibc`.
Take a look at Project Portola, which aims to provide a port of the JDK to the Alpine Linux distribution, and specifically to `musl`. However, running the OpenJDK port for Alpine is not supported yet.
#14 - Monitoring and observability
When you start running Docker containers, you’re going to want to know what’s running inside. This is about being able to monitor your applications and use observability to see what’s going on.
You will want to think about finding a way to add these capabilities to your containers automatically, so that you aren’t manually adding the tools for every application you build. One research paper found that 90% of Docker Java containers would benefit from better observability, which could be achieved by automatically adding observability tools to their Dockerfiles.
If you’re looking for tools to add, you could try:
- Jolokia, for making JMX available over HTTP (so you can inspect your application using a REST API)
- Hawtio, an open source web console for managing your application
- Prometheus, an open source monitoring solution for your fleet of containers
In this article I’ve covered lots of best practices that I’ve picked up when compiling and running Java applications in Docker.
The most important things to focus on are keeping your images portable (through external configuration) and slim and secure (by using the right base image). You should also think about optimising your app for a fast start-up time, and make sure you’re running at least Java 8u191, preferably Java 11 or later.