As some of you might remember, we wrote a piece on how to run highly available WordPress sites back in June 2015. Whoa, that long ago. Kontena, Docker and the whole container ecosystem have seen a tremendous number of changes since those days, so we thought it was time to give it another shot and see if we'd do some things differently now.

One of the biggest changes, when thinking about how to run WordPress on containers, has been the development around volumes. Back in 2015, pretty much the only way one could separate the lifecycle of the containers and the data was to use a pattern called data-only containers. That pattern is still usable as such, but it has the major drawback of pinning the data to one specific node. Now we can use named volumes, and there's an entire ecosystem of volume plugins offering different kinds of storage capabilities as volumes.

WordPress container image

Back in 2015, there was already an official WordPress image available on Docker Hub. And it's still available today.

When I started to look at "modernizing" the WordPress deployment, I tried hard to use the official image. But in my opinion it has a couple of quite major drawbacks:

The official image uses a startup script that determines the state of the "installation" and copies over the basic WordPress files during the first startup. While this might work pretty OK in single-node setups, it can cause a lot of trouble in clustered environments. And surprisingly, this method does NOT even support WordPress upgrades at all, yet.

If you've been reading my blogs and/or seen me speak at conferences and events, you've noticed that continuous delivery is one of my favorite topics. And from that point of view, this kind of on-the-fly copying of content gives me nightmares. If we copy the WordPress files within the container during startup, how can we be sure it actually works? As you roll out your new version into production, you've got absolutely no idea whether it will work, as the file copying happens only during the startup phase.

I guess one of the reasons it has been done like this is to separate the plugin etc. installation state from the lifecycle of the WordPress container itself, kinda like the data-only container pattern. Maybe I'm a container purist, but for me a container is an immutable thing; you don't go and change its contents during runtime. The image already has everything needed to run the thing you want to run, nothing more, nothing less. Full stop. I think I could rant for hours on this topic... :D

Enough ranting, show me how to do things better.

The image

So I went and built my own image that bundles everything into the image itself: the basic WordPress stuff, any custom configs and, of course, any custom plugins, themes, pages, etc. you want to run within WordPress. I also threw in the wonderful wp-cli tool to allow for easier operations on the WordPress setup.
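As a rough sketch of the idea (the exact Dockerfile lives in the demo repo; the WordPress version, theme name and file layout here are illustrative), the image could be built along these lines:

```dockerfile
# Sketch only -- see the demo repo for the real Dockerfile
FROM php:7.1-apache

# WordPress needs the mysqli extension to talk to MySQL/Galera
RUN docker-php-ext-install mysqli

# Bake WordPress itself straight into the web root -- no startup-time copying
ENV WORDPRESS_VERSION=4.8.1
RUN curl -fSL -o wordpress.tar.gz "https://wordpress.org/wordpress-${WORDPRESS_VERSION}.tar.gz" \
    && tar --strip-components=1 -xzf wordpress.tar.gz -C /var/www/html/ \
    && rm wordpress.tar.gz \
    && chown -R www-data:www-data /var/www/html

# Bundle wp-cli for easier operations
RUN curl -fSL -o /usr/local/bin/wp https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \
    && chmod +x /usr/local/bin/wp

# Custom config, themes and plugins travel inside the image too
COPY wp-config.php /var/www/html/
COPY theme/ /var/www/html/wp-content/themes/my-theme/
```

The point is that everything the site needs is in the image at build time, so it can be tested as-is before it ever reaches production.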

Check out the full demo repo at GitHub.

I was not able to use the official image as a base since Docker does not support un-defining VOLUME instructions. Hence you'll see a lot of things done similarly to the base image. :)

Running it locally

Running the sample site locally is pretty straightforward with Docker Compose:

version: '2'

services:
  wordpress:
    build: .
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_DATABASE: wordpress
      WORDPRESS_SITE_URL: "localhost:8080"
      WORDPRESS_SITE_DESCRIPTION: "Wanna-be troutbum who dreams of big fish"
    volumes:
      - ./theme:/var/www/html/wp-content/themes/my-theme

  mysql:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

volumes:
  db_data:


To allow for easy theme development, I go and mount the theme folder into the container. No need to re-build and restart the container for every single change.
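With that in place, the local development loop is just a couple of commands (assuming the compose file above sits in the repo root):

```shell
# Build the image and start WordPress + MySQL locally
$ docker-compose up -d --build

# The site is now at http://localhost:8080; edits under ./theme
# show up immediately thanks to the bind mount
$ docker-compose logs -f wordpress

# Tear everything down (add -v to also drop the db_data volume)
$ docker-compose down
```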

Running it in production

Of course, running things in a production-like environment is much more complex and therefore also more interesting. But with the custom image, I think I've made a lot of things simpler than with the official image. Pretty much the only "state" we need to worry about is the uploads data: when you're managing your posts, pages and whatnot in the WordPress admin, you can upload new media to be used.

And now for this data we need to solve two issues:

  1. Persist it separately from the containers
  2. Make it available for many containers, across many servers

It's no big surprise that we can use modern container volume concepts to actually solve both these issues.

Salt, Nonce and others

WordPress requires quite a few secrets to operate securely: the authentication keys and salts (AUTH_KEY, SECURE_AUTH_KEY, LOGGED_IN_KEY, NONCE_KEY and their corresponding *_SALT values) that are normally hard-coded in wp-config.php.


The new Kontena stack variables and the capability of generating random secrets give us a neat way to handle these:

variables:
  wordpress-auth-key:
    type: string
    from:
      vault: ${STACK}-auth-secret
      random_string: 24
    to:
      vault: ${STACK}-auth-secret

The generated secret is then injected into the service as an environment variable:

services:
  wordpress:
    secrets:
      - secret: ${STACK}-auth-secret
        name: WORDPRESS_AUTH_KEY
        type: env

Rinse and repeat for all the secrets, and we've got a secured WordPress setup on our hands.

Using volume to store upload data

So to be able to share the upload data between many container instances, possibly running on different nodes, we really need some sort of networked storage. In one of the previous posts I introduced a way to use S3-like storage in containers. That would work too, but in this case it has two small bumps I want to avoid:

  • it requires a volume plugin, and I want to avoid that "complexity" in this case
  • S3 is not the fastest storage out there, and we need to be able to serve our media data pretty fast

The third reason I wanted to use something else is to reveal one of the lesser-known "secrets" of Docker and volumes. Well, it's really no secret at all, but it almost feels like one, as people just haven't realized you can do things like this.

So, I'll be using AWS EFS for the storage. EFS is something you can mount as an NFS drive within the VPC you provision it into. But don't I then need some NFS volume driver to use it? That's the "secret": you don't.

It's very well hidden* in the Docker documentation that one can use the local driver to mount, as a volume, pretty much any storage you could mount as root on the host node. In this case we mount an existing NFS share (the EFS in AWS) to be used as the storage for the uploads.

*) Well enough that I was not able to find and link it here. I remember seeing it there though, so I know it exists.
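Outside of Kontena, the same trick with the plain Docker CLI would look something like this (the volume and image names are just examples; the mount options mirror the Kontena command below):

```shell
# Create a volume backed by an NFS (EFS) share using only the built-in local driver
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=<efs-address>,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    --opt device=:/ \
    wp-uploads

# Any container mounting the volume now reads and writes straight to EFS
$ docker run -d -v wp-uploads:/var/www/html/wp-content/uploads my-wordpress
```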

When orchestrating the production system using Kontena, you'd create the volume definition like so:

$ kontena volume create --driver local \
    --driver-opt "o=addr=<efs-address>,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" \
    --driver-opt "device=:/" \
    --driver-opt "type=nfs" \
    --scope stack wp-uploads

So we really do not need to set up any volume plugins on any of the nodes. Kontena will schedule the volume with our WordPress containers and instruct Docker to always mount the same EFS "drive" regardless of the node we are running on. Handy.

As we have everything else built into the image itself, we "just" need to use this NFS (EFS) backed volume to hold the upload data:

services:
  wordpress:
    image: jnummelin/wordpress:edge
    build: .
    links:
      - ${lb}
      - mysql
    volumes:
      - wp-uploads:/var/www/html/wp-content/uploads

volumes:
  wp-uploads:
    external: true

Voilà, everything we upload during the WordPress lifetime will now be persisted in the NFS storage. We can easily scale WordPress up and down as needed and be sure the data follows.

The database

The database is one of the most critical pieces of this solution. There are at least two options I see:

  1. Use some hosted service for the database: AWS RDS, Compose, Aiven or something else.
  2. Pull in some ready-made database stack as a dependency in the WordPress stack.

If I were running a real production setup, I'd probably go for numero uno. It just makes things so much easier to manage and operate. Actually, there's nothing stopping you from combining these options with some stack variables and Liquid templating magic: use an "embedded" database deployment for test & staging, but an externally managed database for prod.
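As a sketch of that idea (the variable name, prompt and external host here are my own assumptions, not from the demo repo), the stack file could branch on a flag with Liquid:

```yaml
# Hypothetical sketch: pick embedded vs. external database per environment
variables:
  use_external_db:
    type: boolean
    default: false
    from:
      prompt: Use an externally managed database?

{% unless use_external_db %}
depends:
  galera:
    stack: galera.yml
{% endunless %}

services:
  wordpress:
    environment:
      {% if use_external_db %}
      WORDPRESS_DB_HOST: my-rds-instance.example.com
      {% else %}
      WORDPRESS_DB_HOST: {{ galera }}
      {% endif %}
```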

As in the original post, I used a Galera cluster as the database. This time as a dependent child stack:

depends:
  galera:
    stack: galera.yml

Now when we install or upgrade the main stack, it will automatically set up the Galera stack for us too.
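In practice that means just two commands (assuming the stack file is kontena.yml and the installed stack is named wordpress):

```shell
# Installing the main stack also installs the galera child stack
$ kontena stack install kontena.yml

# Upgrading the parent walks the dependency chain as well
$ kontena stack upgrade wordpress kontena.yml
```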

Of course, the main WordPress stack needs to know some details of the child stack. For example, we can refer to the exposed service of the child stack with just the dependency name:

  WORDPRESS_DB_HOST: {{ galera }}

The child stack generates a random password for the database. We can easily pick it up on the main stack:

secrets:
  # Secret generated in the child stack
  - secret: ${STACK}-galera-mysql-pwd
    name: WORDPRESS_DB_PASSWORD
    type: env


The container ecosystem has grown a lot since 2015 when the original post was written. With the current setup we've been able to simplify many things and make the whole deployment much more manageable. Technically, the biggest win comes from using the built-in Docker local driver to mount networked storage for the uploads. Using Kontena stacks, variables and the dependency mechanism makes the whole setup really easy to manage and understand.

For me, especially as a CI/CD fanatic (or lunatic?), it makes perfect sense to build the WordPress image so that it has everything bundled in, as opposed to dynamically copying the plugins etc. onto a shared volume at startup. This of course has the drawback that you need to actually set up your desired plugins and such in the image itself. But that also makes them travel through your CI/CD pipeline. Wait a minute, that's exactly what we want, right? :D

As I'm not really a deep WordPress expert, I might have overlooked some things regarding the custom image I've built. If so, please let me know.

Image Credits: Bruno Martins.