Local Drupal development using Docker


One of the biggest arguments for using Docker to develop your app is having isolated environments for different setups (the classic case of needing two different versions of PHP for two projects). Even that is sometimes not convincing enough. Where I find Docker most useful is in building production parity locally (think trying to reproduce a production bug on your machine). If you've faced this problem and want to solve it, read on.

Enter Docker

I wanted to switch over to Docker when I first heard about it. It sounded like Docker would address all the problems posed by Vagrant without compromising the benefits. I adopted Docker4Drupal, a comprehensive Docker-based setup open sourced by the nice folks at Wodby. I even use Lando these days; it's quite handy if you're working on non-Drupal projects as well. So why would I spin my own Docker setup with all these wonderful tools around?

  • For one, I'd have the ability to tweak it wherever I want to. I can open the hood and debug if things go wrong.
  • I wanted the same configuration on both production and local. Lando is local-only, and Wodby uses some magic sauce on their production servers to make the same configuration run in production. I'd prefer to run the setup on my own servers and have the ability to see through the stack if things go wrong.
  • I'd like to think of this effort as the DIY Drupal hosting Docker installment ;) This is also an attempt to explain what happens behind the scenes in a container-based Drupal setup, for developers who are curious about its workings.

With the rationale to "reinvent the wheel" out of the way, let's build our custom setup.

First, a humble docker-compose file to get the setup up and running. As an aside, a docker-compose file is a declarative configuration (in YAML) of how your web stack should roll out as Docker containers. We'll use the v2 compose file format, as it's more widespread; I'll update this post for a v3 version sometime in the future.

We're building a LEMP stack, which involves a web app (PHP + Nginx) talking to a DB service (MariaDB). Docker Compose serves as a single "source of truth" specification for the stack. It also puts all the associated containers on the same Docker network, so it's easier for the services to discover each other.

As we are particular about having the same configuration in both local and production, we use docker-compose inheritance to keep the commonalities in one place and add environment-specific configuration in a respective file. Our setup consists of 3 containers:

  • A MariaDB image for the database
  • An Nginx image for the HTTP web server
  • A PHP image for serving PHP

Here's what the base compose file looks like:

version: "2"

services:
  mariadb:
    image: mariadb:10.3

  php:
    build:
      context: .
      dockerfile: ./deploy/php/Dockerfile

  nginx:
    image: nginx:alpine

We can add more optional containers, like phpMyAdmin, if needed.

Choice of images

We have the option to use a pre-built Docker image or build our own. The decision rests on a few considerations:

  • The degree of tweakability we want. For instance, we don't want to tweak the MariaDB settings beyond changing the database credentials, so we use the official mariadb image.
  • Size of the image. Docker images can get big pretty fast, and it makes sense to shave off anything we don't need. Alpine Linux is known for its small containers, and most official Docker images have an Alpine variant. We use those whenever we can; in our case, the Nginx image uses Alpine Linux.
  • Support for updates. How quickly can you roll out a newer version of your container once there is a security release for PHP? This is arguably the most important consideration when choosing images. You're much better off choosing an OS/distribution with good support for updating packages. I picked Debian for this reason; besides, Debian/Ubuntu has an excellent developer ecosystem. You can substitute RHEL and its variants (like CentOS/Fedora) and get similar mileage. This image is built on a PHP-FPM + Nginx stack, which I'm more familiar with than its cousin, the Apache mod_php setup. You can achieve a similar setup with Apache mod_php as well.

Building the images

  1. PHP

    We start with the `php:7.1-fpm` image as the base. This strikes a good balance between bleeding edge and stability. Apart from installing the basic Drupal-related dependencies, we install some binaries like Git, Wget and Composer. We also configure a working directory, /code, where the code gets injected. There is room for improvement, like running the main processes (Nginx and PHP-FPM) as non-root users, but that's the topic of a later blog post.

    Our PHP docker image looks like this,

    FROM php:7.1-fpm
    
    RUN apt-get update \
        && apt-get install -y libfreetype6-dev libjpeg62-turbo-dev libpng-dev wget mysql-client git
    
     RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
         && docker-php-ext-install gd \
         && docker-php-ext-install pdo pdo_mysql opcache zip \
         && docker-php-ext-enable pdo pdo_mysql opcache zip
    
    # Install Composer
    RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
    
    WORKDIR /code
    

    Also, I'd prefer that search engines don't crawl my non-production sites. I could do this with a .htaccess-based approach, but that apparently incurs a performance penalty. Instead, I add a line to my Dockerfile that writes a restrictive robots.txt whenever a NON_PROD flag is set.

     # NON_PROD must be declared as a build ARG (and passed at build time)
     # for the RUN instruction below to see it; a runtime environment
     # variable is not visible during the image build.
     ARG NON_PROD
     RUN if [ -n "$NON_PROD" ] ; then printf "User-agent: *\nDisallow: /\nNoindex: /\n" > ./web/robots.txt; fi
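    Before baking it into an image, you can preview what that printf emits in a plain shell (writing to a temp path here instead of ./web/robots.txt):

```shell
# Preview the robots.txt content the Dockerfile writes
printf "User-agent: *\nDisallow: /\nNoindex: /\n" > /tmp/robots-preview.txt
cat /tmp/robots-preview.txt
```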
    
  2. mariadb

    There are 2 things to take care of when building the database container:

    • Injecting database credentials and exposing them to other containers who want to use it,
    • Persisting databases even if containers are killed and restarted.

    For the former, we use a .env file, which Docker Compose reads automatically. It supplies all the environment variables our containers need. Here's how our .env will look:

    MYSQL_ROOT_PASSWORD=supersecretroot123
    MYSQL_DATABASE=drupaldb
    MYSQL_USER=drupaluser
    MYSQL_PASSWORD=drupaluserpassword123
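    Under the hood, docker-compose treats each KEY=VALUE line as a variable available for ${VAR} substitution in the compose file. You can approximate what it does in a plain shell (using a temp copy so the real .env stays untouched):

```shell
# Approximate how docker-compose loads .env: each KEY=VALUE line
# becomes an exported shell variable
cat > /tmp/demo.env <<'EOF'
MYSQL_DATABASE=drupaldb
MYSQL_USER=drupaluser
EOF

set -a             # auto-export every variable defined while this is on
. /tmp/demo.env    # source the file
set +a

echo "$MYSQL_DATABASE"
```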
    

    Let's tweak our local docker compose file to pick up these variables.

    services:
      mariadb:
        extends:
          file: docker-compose.yml
          service: mariadb
        environment:
          MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
          MYSQL_DATABASE: ${MYSQL_DATABASE}
          MYSQL_USER: ${MYSQL_USER}
          MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    

    Note the use of the extends construct, which picks up everything else about the mariadb container from the base compose file and "extends" it. While we're at it, let's give our database container persistence by mounting volumes. We map 2 volumes: one stores the actual database data (which resides at /var/lib/mysql inside the container), the other supplies init scripts to MySQL. The official MariaDB image ships with a way to initialize the database with sample data: scripts placed in /docker-entrypoint-initdb.d are executed on first startup. We will see that this is pretty useful for building production replicas of our Drupal site. Let's add both volumes to the local compose file.

    services:
      mariadb:
        extends:
          file: docker-compose.yml
          service: mariadb
        environment:
          # ...
        volumes:
          - ./mariadb-init:/docker-entrypoint-initdb.d
          - ./mariadb-data:/var/lib/mysql
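    For example, seeding the database from a production dump is just a matter of dropping the dump into the init directory before the first boot (file names here are illustrative):

```shell
# Any *.sql or *.sql.gz dropped into ./mariadb-init runs exactly once,
# on first boot, while /var/lib/mysql is still empty
mkdir -p mariadb-init
echo "CREATE TABLE IF NOT EXISTS demo (id INT);" > mariadb-init/seed.sql
ls mariadb-init
```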
    
  3. Nginx config

    The Nginx container requires 2 things:

    • where the code resides inside the container.
    • the nginx configuration for running our Drupal site.

    Both these inputs can be supplied by mounting volumes.

    I picked up the Nginx configuration from here. I separated it into 2 files (a generic one, and a Drupal-specific file included by it) for a cleaner, more modular setup. This could be a single file as well.
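    For reference, here's a minimal sketch of what the Drupal-specific include might contain. Your actual file will be longer; this just shows the two essential parts, the clean-URL rewrite and the FastCGI hand-off to the PHP container addressed by its compose service name:

```nginx
# deploy/nginx/config/drupal -- illustrative sketch, not the full config
location / {
    # Drupal clean URLs: fall back to index.php
    try_files $uri /index.php?$query_string;
}

location ~ \.php$ {
    fastcgi_pass php:9000;   # "php" resolves via the compose network
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```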

    Here's what my Nginx container spec looks like in the local compose file:

    nginx:
      extends:
        file: docker-compose.yml
        service: nginx
      volumes:
        - ./:/code
        - ./deploy/nginx/config/local:/etc/nginx/conf.d/default.conf
        - ./deploy/nginx/config/drupal:/etc/nginx/include/drupal
      ports:
        - '8000:80'
    

    Finally, Nginx listens on port 80 inside the container, and we publish it on port 8000 of the host. Feel free to change 8000 to anything else you find appropriate.

  4. First spin at local

    Before we boot our containers, we have to make a few small tweaks to our PHP setup. We mount the code directory inside the PHP container because the PHP-FPM process needs access to it. Also, as a 12-factor app best practice, we expose DB-specific details as environment variables.

    Here's how our PHP container spec looks in the local compose file:

    php:
      extends:
        file: docker-compose.yml
        service: php
      volumes:
        - ./:/code
      environment:
        PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
        PHP_FPM_CLEAR_ENV: "no"
        DB_HOST: mariadb
        DB_USER: ${MYSQL_USER}
        DB_PASSWORD: ${MYSQL_PASSWORD}
        DB_NAME: ${MYSQL_DATABASE}
        DB_DRIVER: mysql
        NON_PROD: 1
    

    Let's update our settings.php to read DB credentials from environment variables.

    $databases['default']['default'] = array (
        'database' => getenv('DB_NAME'),
        'username' => getenv('DB_USER'),
        'password' => getenv('DB_PASSWORD'),
        'prefix' => '',
        'host' => getenv('DB_HOST'),
        'port' => '3306',
        'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
        'driver' => 'mysql',
    );
    

    NOTE: the settings.php file is, by deliberate design, ignored by Git, as it might contain sensitive information about your environment like passwords or API keys. For this setup to work, you will have to check in the settings.php file and doubly ensure that it does not contain any sensitive information. Anything sensitive should instead be injected into your app using environment variables, as we did for the DB credentials above.
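    Values set under `environment:` behave like ordinary process environment: visible to the running process, never written to a file in the repo. A plain-shell analogy of what the PHP container sees:

```shell
# The variable exists only in the child process's environment,
# just like a value injected by docker-compose -- not in any checked-in file
DB_HOST=mariadb sh -c 'echo "connecting to $DB_HOST"'
```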

  5. Booting our app

    Let's boot our full docker stack.

    $ docker-compose -f local.yml up --build -d
    

    To check the logs, run,

    $ docker-compose -f local.yml logs <container-name>
    

    The app can be accessed at localhost:8000, or whatever port you mapped for Nginx in the local compose file. Make sure you run composer install before setting up Drupal.

    $ docker-compose -f local.yml run php composer install
    

    To run drush, supply the full path to the drush executable and the root directory where Drupal is installed:

    $ docker-compose -f local.yml run php ./vendor/bin/drush --root=/code/web pm-list
    

    If you add /code/vendor/bin to PATH when building the container and create a drush alias with /code/web as the root directory, you can run drush more elegantly, but that's totally optional.
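    That PATH tweak is a one-liner in the PHP Dockerfile (assuming the code stays mounted at /code, as in our setup):

```dockerfile
# Make project-local binaries like drush resolvable without the full path
ENV PATH="/code/vendor/bin:${PATH}"
```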

    Finally, to stop the setup, we run

    $ docker-compose -f local.yml down
    

    That's pretty much how you run Drupal 8 on Docker locally. We shall see how to translate this into a production setup in the next installment.

Subscribe and download code

Want to download the code used in this blog post? Please subscribe to my newsletter in which I send my latest posts and tutorials.