Kubernetes Deployment spec examples

Hey there, Kubernetes fan.

When you’re working with Kubernetes, you’ll soon find that its declarative nature means you spend a lot of time applying manifests to the cluster. A manifest describes an object that you want to exist in your cluster.

You write manifests for Kubernetes resources in JSON or YAML, and then use the Kubernetes API to apply them to the cluster.

For me, the object I probably create most often is a Deployment, which has a rather large and complex spec.

In Kubernetes, a Deployment spec is a definition of a Deployment that you would like to exist in the cluster. It represents the state that the Deployment should have.

Writing these manifests manually is a bit of a slog. So sometimes it’s helpful to see what a real manifest looks like, so you can use it as a starting point for your own.

And at the end of this article, I’ll show you some time-saving tools to use, so you don’t even need to write a manifest manually if you don’t want to.

So what’s the difference between a Kubernetes manifest and a spec?

A manifest is a JSON or YAML representation of an object that you want to exist in your Kubernetes cluster. Within the manifest, a field called spec describes the state that the object should have.
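
To make that concrete, here’s the rough shape that almost every manifest follows (the values below are just placeholders):

apiVersion: apps/v1     # which API group and version the object belongs to
kind: Deployment        # the type of object you want to create
metadata:
  name: my-deployment   # the object's name (plus optional labels and annotations)
spec: {}                # the desired state of the object goes here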

In this article I’ll show you a few real-world examples, so that you can see what a typical Deployment looks like. Hopefully this will give you a starting point for creating your own Kubernetes Deployment manifests.

Example Deployment YAMLs for Kubernetes

You can write manifests for Kubernetes in either JSON or YAML. I prefer YAML, because it’s less verbose and I don’t need to worry about missing brackets.

However, the disadvantage of YAML is that you need to pay special attention to your indenting!

How to convert these examples to JSON

If you prefer to write your Kubernetes manifests in JSON, you can convert these examples to JSON, using our YAML to JSON converter that runs in your web browser!

Or you can convert these examples locally with the yq command line tool (I’m assuming the Go version of yq here). Once you’ve installed it, you can convert a YAML file to JSON by piping it to yq and asking for JSON output, like this:

cat deployment.yml | yq -o=json

And out will pop the JSON equivalent of your YAML file.

Deploying a simple application

This is a simple Deployment spec, to begin. This Deployment creates replicas to run the Nginx web server. Each replica (a Pod) has just one container, which runs the Docker image nginx.

This will run the nginx image from Docker Hub.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80   # the port the nginx image listens on

Note how the labels in selector.matchLabels match the labels in the template.metadata.labels block.

matchLabels is a query that allows the Deployment to find and manage the Pods it creates.
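
For example, once you’ve saved this manifest to a file (I’ll call it nginx-deployment.yaml, but the filename is up to you) and applied it, you can use that same label to list the Pods that the Deployment is managing:

kubectl apply -f nginx-deployment.yaml
kubectl get pods -l app=nginx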

This is a very basic example, and is fine for deploying an off-the-shelf container from Docker Hub. But what happens when you want to provide some runtime configuration to the application?

Configuring your application with environment variables

When you want to provide runtime configuration to your container in Kubernetes, you might use environment variables (env vars), a well-known way of configuring a container.

You can set env vars directly on a Deployment. But you don’t have to hardcode their values, especially when an env var holds something environment-specific, like a database connection string or a hostname.

Instead, you can populate the environment variables with values from a ConfigMap or Secret.

This example Deployment shows how to deploy an app which has environment variables taken from a ConfigMap and a Secret:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: mydatabase           # an explicit env var value
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:          # populate from a ConfigMap
                  name: postgres-config   # ... with this name
                  key: my.username        # ... and look for this key
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:             # populate from a Secret
                  name: postgres-secret   # ... with this name
                  key: secret.password    # ... and look for this key

Externalising configuration is a really common pattern – one of the 12 Factor App principles – and a great one to follow for most apps. It means that you can set and update the ConfigMap in the cluster separately, without updating the Deployment itself.
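
For the Deployment above to start, the ConfigMap and Secret it references need to exist in the cluster. Here’s a sketch of what they might look like (the values are made up purely for illustration):

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  my.username: myuser             # referenced by configMapKeyRef above
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
stringData:
  secret.password: changeme       # referenced by secretKeyRef above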

Then, once your app is deployed, you might be concerned about how you’re going to upgrade it when you want to roll out a new version.

Configuring zero-downtime upgrades for your app

Zero-downtime upgrades are a really cool feature of Kubernetes, and the rolling update strategy is how you get them.

In a rolling update, Kubernetes upgrades your Pods to a new container image version whilst making sure that enough replicas of your application stay available throughout, so your users don’t experience downtime.

To take advantage of this, you configure a rolling update strategy on the Deployment. You do this by setting strategy.type to RollingUpdate. You also set some parameters for the update, like whether Kubernetes can temporarily run more Pods than the desired replica count (this is called surging), to allow the upgrade to take place.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rolling
  labels:
    app: nginx-rolling
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-rolling
  strategy:
    type: RollingUpdate   # Upgrade this application with a rolling strategy
    rollingUpdate:
      maxSurge: 1         # maximum number of pods that can be scheduled
                          # above the desired number of pods (replicas)
      maxUnavailable: 0   # the maximum number of pods that can be unavailable
                          # during the update
  template:
    metadata:
      labels:
        app: nginx-rolling
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80   # the port the nginx image listens on
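
With this strategy in place, you trigger a rolling update just by changing the Pod template - for example, by pointing the container at a new image tag - and you can watch the rollout progress with kubectl (the tag 1.25 below is just an example):

kubectl set image deployment/nginx-rolling nginx=nginx:1.25
kubectl rollout status deployment/nginx-rolling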

Putting multiple Deployments in the same file

If you want to put more than one Kubernetes manifest (like a Deployment, Service, etc) in a single file, you can separate each entry with three hyphens (---). This is sometimes used if you want to keep all of your resources together; perhaps you want to track changes to a single file in Git.

Here’s an example of what I mean - I’m creating Deployments for Nginx and PostgreSQL in the same YAML file. I separate the two manifests with three hyphens:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: postgres
        name: postgres
        ports:
        - containerPort: 5432

This is a neat trick, and means you can distribute just one YAML file containing all of your application’s manifests, if you want.
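
For example, if you save the combined file above as app.yaml (a filename I’ve picked just for this example), one command creates both Deployments:

kubectl apply -f app.yaml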

You don’t need to memorise all this YAML

These examples have shown how to create Deployments which use some Kubernetes features, like configuration management and upgrades.

But here’s some good news: you don’t need to memorise all this YAML, or write Kubernetes manifests manually. There are tools that can help you get there more quickly, like these:

  • kubectl, the Kubernetes CLI tool, can create a basic Deployment and apply it to the cluster. Just use kubectl create deployment .... If you just want the YAML, use the -o yaml and --dry-run options (there’s an example just after this list). Check the help (-h) for the options and examples.

  • Eclipse JKube - this plugin for Maven generates Kubernetes manifests for your Java applications. It creates Deployments, Services, and more.

  • Kubernetes YAML Generator - this web app from Octopus helps you generate a Deployment YAML in your browser. It covers almost every option imaginable!
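
Here’s what that first tip looks like in practice. This command asks kubectl to generate a Deployment for the nginx image and print the YAML instead of applying it (the deployment name and image are just examples):

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml

You can redirect that output to a file and use it as the starting point for your own manifest.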

You can always consult the official spec using kubectl

Whenever you want to see the whole spec and all of the options that you can configure, you can get the full Deployment object specification with the kubectl tool.

Type kubectl explain deployment and you’ll see the top-level structure of a Deployment:

$ kubectl explain deployment
KIND:     Deployment
VERSION:  apps/v1

DESCRIPTION:
     Deployment enables declarative updates for Pods and ReplicaSets.

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

...continues...

You can fetch further layers in the spec, like this: kubectl explain deployment.spec. You can keep using this technique to drill right down into the spec.

Or, you can add the --recursive option which will dump out an outline of the whole spec, e.g. kubectl explain deployment.spec --recursive.
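
For example, to see how the rolling update settings from earlier are documented, you could drill down a couple of levels, or dump the whole outline and page through it:

kubectl explain deployment.spec.strategy.rollingUpdate
kubectl explain deployment.spec --recursive | less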

This is a really handy tip that I use all the time!

And, with that, I wish you happy Deployment spec-writing. ʕ ·͡ᴥ· ʔ
