Speed up Docker using NFS volumes

Containers change not only the way we deploy applications but also the way we develop them. Thanks to Docker, we can run a variety of services on our local machines with no need to worry about installing extra apps, libraries, and other dependencies.

Docker for Mac, however, is much slower than Docker on Linux. The Docker server relies on many Linux kernel-specific features, so it is hard to port to systems like macOS or Windows. The current implementation, based on a virtual machine, suffers from poor performance unless it is tuned for the specific OS.

I’m going to show you how to improve the performance of applications running on Docker for Mac by using NFS volumes.

Why do volumes on macOS work so slowly?

Docker for Mac runs the Docker server within a virtual machine. The host machine shares its file system with the VM using osxfs. According to the documentation:

There are a number of issues with the performance of directories bind-mounted with osxfs. In particular, writes of small blocks, and traversals of large directories are currently slow. Additionally, containers that perform large numbers of directory operations, such as repeated scans of large directory trees, may suffer from poor performance.

Projects based on Symfony or the Laravel framework consist of lots of files: every piece of code and responsibility has dedicated classes, and on top of that they pull in extra libraries managed by Composer. Docker needs to keep tons of files in sync between the host machine and the container.
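A quick way to see the scale of the problem is to count the files in your project; in a typical Symfony or Laravel project, the Composer-managed vendor directory alone often holds tens of thousands of files:

```shell
# Count how many files Docker has to keep in sync (run in the project root)
find . -type f | wc -l

# The vendor directory, managed by Composer, is usually the biggest culprit
find vendor -type f | wc -l
```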

Tweak the configuration to get better performance

Standard configuration

I have an application I want to run within Docker. It has some special dependencies I don’t want to install on my host machine. The docker-compose.yml definition looks as follows:

version: '3'

services:
    php-fpm:
        build:
            context: .
            dockerfile: .docker/php/Dockerfile
        volumes:
            - ./project:/app
            - .docker/php/php.ini:/usr/local/etc/php/php.ini:ro
    nginx:
        image: nginx:latest
        ports:
            - "8080:80"
        volumes:
            - ./project:/app
            - .docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro

I use separate containers for the web server and the PHP interpreter. Thanks to this approach, I can test whether my application works with a different version of PHP without touching the web server.

Unfortunately, my app works so slowly that I have to wait a couple of seconds for a single request to complete. Moreover, my app is an API for a frontend app, so it has to handle many requests simultaneously. As a result, I have to wait ~20 seconds before the frontend app is usable.

Delegated option to the rescue

Docker for Mac allows you to use a special configuration option for volumes to improve performance. By default, Docker mounts each volume using the consistent option. It means that everything you see on the host machine is exactly the same as in the container, at all times.

Full consistency between host and container isn’t always necessary. From our perspective, we want to see our local changes in the container as fast as possible. If something changes in the container (e.g. the app writes logs to a file), we can afford to wait a bit, because it isn’t crucial.

To turn on the aforementioned behavior, you can use the delegated option. You can add it to the definition of volumes in docker-compose.yml.

version: '3'

services:
    php-fpm:
        build:
            context: .
            dockerfile: .docker/php/Dockerfile
        volumes:
            - ./project:/app:delegated
            - .docker/php/php.ini:/usr/local/etc/php/php.ini:ro
    nginx:
        image: nginx:latest
        ports:
            - "8080:80"
        volumes:
            - ./project:/app:delegated
            - .docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro

Restart the containers and test whether anything changed. In most cases, the results should be noticeable and sufficient. If you want to achieve even better performance, you can try mounting volumes using NFS.
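One quick way to compare configurations is to time a single request from the host. A sketch using curl; the URL here is just an example, point it at one of your own endpoints:

```shell
# Restart the stack so the new mount options take effect
docker-compose down && docker-compose up -d

# Measure the total time of a single request (example URL)
curl -o /dev/null -s -w 'total: %{time_total}s\n' http://localhost:8080/
```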

Setup NFS and mount volumes

Before Docker for Mac, I used docker-machine to run the Docker server on macOS. It used a virtual machine (VirtualBox) under the hood, so it wasn’t very performant either. However, I found a tool called docker-machine-nfs that mounts the filesystem into the VM via NFS. It increased the overall performance dramatically.

Docker Compose allows defining an NFS volume that can be mounted into a container. The only thing you need to do is configure and expose your NFS server.

Prepare your filesystem

If you want to mount volumes over NFS, you need to export the selected directories in the /etc/exports file. Some people export the entire home directory; however, I prefer to limit what is accessible over NFS to only the data I need.
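For illustration, an /etc/exports entry could look like this (the /Volumes/Work path and the 501:20 user/group IDs are examples; check yours with `id -u` and `id -g`):

```
# /etc/exports – export a single directory to localhost only,
# mapping all access to your user and group IDs
/Volumes/Work -alldirs -mapall=501:20 localhost
```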

One approach is to create a separate volume for work data. Thanks to the APFS file system, this operation is easy and non-invasive for the disk. Moreover, you can encrypt the new volume with an independent password to secure your sensitive, work-related data.

This step is optional. If you want, you can export your entire user directory.

Open the Disk Utility tool. Then, add a new volume. There’s no need to choose the size of the volume unless you want to reserve or limit the accessible space.

Disk Utility window where you can create a new APFS encrypted volume.

As I said before, I recommend using an encrypted APFS volume. It reduces the risk of data theft. Of course, remember to unmount the volume when you finish work.
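Locking and unlocking the encrypted volume can also be done from the terminal with diskutil (a sketch; “Work” is the example volume name used in this article):

```shell
# Unlock and mount the encrypted APFS volume (prompts for its password)
diskutil apfs unlockVolume Work

# Lock it again when you finish work (unmounts and locks the volume)
diskutil apfs lockVolume Work
```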

Last thing: move your projects to this new volume.

Enable NFS

I found a little script which makes the whole NFS configuration quick and easy. Credits go to this guy.

#!/usr/bin/env bash

OS=`uname -s`

if [ "$OS" != "Darwin" ]; then
  echo "This script is OSX-only. Please do not run it on any other Unix."
  exit 1
fi

if [[ $EUID -eq 0 ]]; then
  echo "This script must NOT be run with sudo/root. Please re-run without sudo." 1>&2
  exit 1
fi

echo ""
echo " +-----------------------------+"
echo " | Setup native NFS for Docker |"
echo " +-----------------------------+"
echo ""

echo "WARNING: This script will shut down running containers."
echo ""
echo -n "Do you wish to proceed? [y]: "
read decision

if [ "$decision" != "y" ]; then
  echo "Exiting. No changes made."
  exit 1
fi

echo ""

if ! docker ps > /dev/null 2>&1 ; then
  echo "== Waiting for docker to start..."
fi

open -a Docker

while ! docker ps > /dev/null 2>&1 ; do sleep 2; done

echo "== Stopping running docker containers..."
docker-compose down > /dev/null 2>&1
docker volume prune -f > /dev/null

osascript -e 'quit app "Docker"'

echo "== Resetting folder permissions..."
U=`id -u`
G=`id -g`
sudo chown -R "$U":"$G" .

echo "== Setting up nfs..."
LINE="/Volumes/Work -alldirs -mapall=$U:$G localhost"
FILE=/etc/exports
sudo cp /dev/null $FILE
grep -qF -- "$LINE" "$FILE" || echo "$LINE" | sudo tee -a "$FILE" > /dev/null

LINE="nfs.server.mount.require_resv_port = 0"
FILE=/etc/nfs.conf
grep -qF -- "$LINE" "$FILE" || echo "$LINE" | sudo tee -a "$FILE" > /dev/null

echo "== Restarting nfsd..."
sudo nfsd restart

echo "== Restarting docker..."
open -a Docker

while ! docker ps > /dev/null 2>&1 ; do sleep 2; done

echo ""
echo "SUCCESS! Now go run your containers 🐳"

This script stops your running containers and prunes unused volumes (make sure all important data is stored outside the volumes), quits Docker, and configures the NFS server. Pay attention to the line that sets LINE to /Volumes/Work – it contains the path to the newly created volume.

If you want to export the whole home directory, you can change this line to /Users. If you use macOS 10.15 (Catalina), the path to the home directory is different, because Apple separated the Data volume from the System volume (the system volume is now read-only), and it looks as follows:

/System/Volumes/Data/Users

The whole line should look like:

LINE="/System/Volumes/Data/Users -alldirs -mapall=$U:$G localhost"

Save this script in your project directory, e.g. as setup_nfs_docker.sh, make it executable, and run it.

> chmod +x setup_nfs_docker.sh
> ./setup_nfs_docker.sh

Check if NFS works properly

Once the NFS server is configured and started, you should be able to connect to it using Finder. Select Go > Connect to Server... from the menu and try to connect to your local server.

Finder’s utility to connect to the remote server. In this example, it’s the localhost.

You should see the content of your volume. In my case, I have a Projects directory.

Finder window showing the content of NFS. In this case, I have only one directory called Projects.

Use NFS volume in docker-compose

It’s time to change the docker-compose.yml definition. Open the file in your editor, add the volumes section, and use the newly defined volume in the containers.

version: '3'

services:
    php-fpm:
        build:
            context: .
            dockerfile: .docker/php/Dockerfile
        volumes:
            - nfsmount:/app
            - .docker/php/php.ini:/usr/local/etc/php/php.ini:ro
    nginx:
        image: nginx:latest
        ports:
            - "8080:80"
        volumes:
            - nfsmount:/app
            - .docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
            
volumes:
    nfsmount:
        driver: local
        driver_opts:
            type: nfs
            o: addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3
            device: ":${PWD}"

Now you can start the containers and see if everything works correctly.
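To confirm that the container really uses NFS, you can inspect the mounts inside it (a sketch; the service name matches the docker-compose.yml above):

```shell
# Start the stack in the background
docker-compose up -d

# Check that /app is mounted over NFS inside the php-fpm container
docker-compose exec php-fpm mount | grep /app
```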

Tests

I haven’t done exhaustive tests to check how much performance increased after these changes. Instead, I took my own requirements into account. The application I’m working on has to be fast enough to give me results within seconds. It’s subjective, but I don’t need production-like performance on my local machine – that’s only a nice-to-have.

I did a couple of simple tests, where I randomly refreshed one view of a React app. It uses an API exposed from a container running on Docker for Mac. The React app itself runs on the host machine.

First, take a look at the network panel in the browser to see how long I waited for the requests to complete.

Default volume (consistent)

XHR connections when the volume is mounted to the container using consistent option, which is the default behavior.

Using delegated option

XHR connections when the volume is mounted to the container using the delegated option.

Using NFS

XHR connections when the volume is mounted to the container using NFS.

Using the delegated option has a big impact on performance; however, it also has some caveats, and you should be aware of them before you decide to use it.

The app running on an NFS volume performs slightly better than the app running on a volume mounted with the delegated option. In some cases, it may speed up the application by seconds, lifting the overall developer experience to a satisfying level.

Please also be aware that the osxfs file system (with both the consistent and delegated options) supports file system events. NFS volumes don’t have this functionality, so if you depend on it, NFS isn’t the right choice.

Here is a comparison of how long I had to wait before the views loaded in the browser with each volume configuration.

Mount method             Time to load view 1 [s]   Time to load view 2 [s]
Default volume           21.64                     25.38
With delegated option     9.99                     10.68
Using NFS                 6.57                      7.59
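From the numbers above, the rough speedup of the NFS mount over the default one can be computed directly:

```shell
# Speedup of the NFS mount relative to the default (consistent) mount,
# using the load times from the table above
awk 'BEGIN { printf "view 1: %.1fx faster, view 2: %.1fx faster\n", 21.64/6.57, 25.38/7.59 }'
```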

Summary

It’s hard to say, but Docker for Mac still has some basic performance problems. By tuning the configuration a bit (e.g. using the delegated option), we can achieve better speeds, but it’s still far from the native feel that Linux offers without any hassle.

Using NFS may speed up the application even more, but it’s not a panacea for Docker-related performance issues in all projects. I should perform more tests to check whether NFS-powered volumes are suitable for big apps. Or maybe you have some experience with it?

About

I'm a software developer from Poland who helps others write better code and live better by showing, explaining and inspiring. Read more about me here.