
Build, Test and Deploy your Laravel Vapor app with Docker and GitLab CI/CD

GitLab has offered CI/CD as part of its service for a few years now, but only recently have I come to utilise its full potential: building a custom Docker image that is used in both testing and deployment.

Docker is great for maintaining a consistent development environment across multiple machines, developers and environments. Docker can even be used to host apps on production and staging environments too, making for a truly consistent stack. For this post I will focus on setting up Docker containers for use with Laravel Vapor in local development and in the CI/CD process.

First, Docker

First things first, we need a local development environment that we can use to run our Laravel app. With Docker, you create a container for each service, e.g. a web server, a database, the PHP runtime and any other services needed to run the app, such as a cache.

First, we will need a docker-compose.yml file in the root of our project to instruct Docker on the various containers we need to run our app. Here we will be using PHP 7.2, NGINX, MariaDB and Redis.

version: '3'

services:

  # The Application
  phpfpm:
    container_name: ${APP_ID}
    build:
      context: ./docker/php-fpm
    working_dir: /var/www
    volumes:
      - ./:/var/www

  # The Web Server
  nginx:
    container_name: ${APP_ID}_nginx
    build:
      context: ./docker/nginx
    working_dir: /var/www
    volumes:
      - ./:/var/www
    ports:
      - 80:80

  # The Database
  db:
    container_name: ${APP_ID}_db
    image: mariadb
    # the mariadb image needs credentials to start; reuse the Laravel .env values
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
      MYSQL_DATABASE: ${DB_DATABASE}
    volumes:
      - dbdata:/var/lib/mysql
    ports:
      - 3306:3306

  # The Cache
  cache:
    container_name: ${APP_ID}_cache
    image: redis

volumes:
  dbdata:

I will not go through all of this file in detail, but there are a few things to note here. First, a number of variables are used in our docker-compose.yml; these are pulled from our project's .env file. Most of them will already be set by Laravel, but you will need to add APP_ID with a snake case version of your app name, e.g. my_project. I do this so that I have a unique and easily recognisable name for each project and container without the need to update this file in the future.
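If you want a quick way to derive the snake case id from your app name, a one-liner like this works (illustrative only):

```shell
# lowercase the name and replace spaces with underscores
echo "My Project" | tr '[:upper:] ' '[:lower:]_'
# → my_project
```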

You will also need to update the DB_HOST and REDIS_HOST values in your .env file to point at their respective containers, which in this instance would be my_project_db and my_project_cache instead of an IP address or localhost.
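The relevant .env entries would then look something like this (my_project is a placeholder for your own app name; DB_HOST is the key Laravel's default .env uses):

```ini
APP_ID=my_project
DB_HOST=my_project_db
REDIS_HOST=my_project_cache
```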

The web server and database are exposed on their traditional ports, so to see the app you should only need to visit localhost in your browser, and to access the database you should be able to connect through a GUI on port 3306.

Next, you will notice that we're not pulling images for PHP or NGINX, but instead giving a build context pointing to a relative path. This tells Docker to build the container from a Dockerfile found at that path.

I like to create a docker folder in the root of my app to hold the Dockerfiles needed to create any custom containers. Inside that I create a folder for each service.

 📂docker
 ┣ 📂nginx
 ┃ ┣ 📜default.conf
 ┃ ┗ 📜Dockerfile
 ┗ 📂php-fpm
 ┃ ┗ 📜Dockerfile

This is my Dockerfile for PHP

FROM php:7.2-fpm

# -- install general
RUN apt-get update && apt-get install -y apt-transport-https ca-certificates gnupg git wget zip zlib1g-dev automake libpng-dev

# -- install php modules: mysqli pdo pdo_mysql zip gd
RUN docker-php-ext-install mysqli pdo pdo_mysql zip gd
RUN pecl install redis && docker-php-ext-enable redis
RUN pecl install xdebug && docker-php-ext-enable xdebug

# -- install node (via the NodeSource setup script; substitute the Node version you need)
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install -y nodejs

# -- install composer
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer --quiet
RUN echo 'export PATH="$PATH:/root/.composer/vendor/bin:/var/www/vendor/bin"' >> /root/.bashrc

Here I am pulling the official PHP-FPM 7.2 image and installing all of the necessary extensions to run and maintain the app. This PHP image is the only one used in our CI/CD process, which is why Node is included: we need it to compile our front-end assets during deployment.

The NGINX Dockerfile is fairly straightforward.

FROM nginx

COPY default.conf /etc/nginx/conf.d/

The only reason for using a custom Dockerfile here is so we can use our own config file, which I have placed in the same folder.

upstream php {
    server phpfpm:9000;
}

server {
    listen       80;

    # this is needed otherwise php can't find its files
    root /var/www/public;

    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;

    charset utf-8;

    client_max_body_size 100M;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;

        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
This is a common config for Laravel on NGINX; the only difference is that we're referencing our php-fpm container in the upstream block so that NGINX can pass requests to it.

Now that we have the Docker environment set up, we should be able to execute docker-compose up -d in the root of our project and the various containers will build and run. The first run will take a while as it needs to build everything from scratch, but subsequent runs will be much quicker.

Note: The -d tells Docker to detach from your terminal so that you can use it for other things. You can omit this if you want to see the output, but cancelling out of the process will also stop the containers.

Now if everything has gone to plan, we should have a local development environment and you should be able to see your app in action by visiting localhost in your browser.
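Day-to-day commands such as artisan and composer can then be run inside the PHP container. A quick sketch, assuming the PHP service is keyed phpfpm in docker-compose.yml (the name the NGINX upstream points at):

```shell
# run one-off commands inside the PHP container
docker-compose exec phpfpm php artisan migrate
docker-compose exec phpfpm composer install

# or open a shell in the container
docker-compose exec phpfpm bash
```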

GitLab CI/CD with PHPUnit and Vapor

If you’re already familiar with Vapor CLI you will know that you can build and deploy your app from the command line. But this can get a bit tedious and doesn’t lend itself well to a CI/CD process. Ideally, we want to be able to push our changes to our repo and automate the testing and deployment stage.

Let’s first create a .gitlab-ci.yml file in the root of our project with the stages we need.

stages:
  - build
  - test
  - deploy

build:
  image: docker:19.03.1
  stage: build
  services:
    - docker:19.03.1-dind
  script:
    # log in to the GitLab registry so we can pull and push the cached image
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:latest ./docker/php-fpm/
    - docker push $CI_REGISTRY_IMAGE:latest

test:
  image: $CI_REGISTRY_IMAGE:latest
  stage: test
  script:
    - cp .env.example .env
    - composer install
    - php artisan key:generate
    - php vendor/bin/phpunit

deploy_production:
  image: $CI_REGISTRY_IMAGE:latest
  stage: deploy
  script:
    - composer install
    - php vendor/bin/vapor deploy production --commit="$CI_COMMIT_SHA"
  only:
    - master

deploy_staging:
  image: $CI_REGISTRY_IMAGE:latest
  stage: deploy
  script:
    - composer install
    - php vendor/bin/vapor deploy staging --commit="$CI_COMMIT_SHA"
  only:
    - develop

Luckily, thanks to our earlier Docker work and GitLab CI/CD, our build process is already done! The build stage will create our custom PHP-FPM Docker image, tag it as the latest image and push it to the GitLab registry for later use. There is nothing more we need to do here, and the image will be used for both testing and deployment. Even better, if you make any changes to the Dockerfile in the future, a fresh image will be built and used before any other stage runs.

The next stage is testing. This will use our PHP-FPM image from the build stage, set up the app and run our project's tests. Again, there is not much to do here; if you're using the standard Laravel setup for your unit tests, everything should run as expected. However, we do not have a database container running in the pipeline, so if any of your tests hit the database you will need to make sure that your phpunit.xml is configured to use SQLite.

<server name="DB_CONNECTION" value="sqlite"/>

I also like to run database tests in memory, as it tends to be a bit faster.

<server name="DB_DATABASE" value=":memory:"/>
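Together, inside the <php> block of phpunit.xml, the two overrides look like this:

```xml
<php>
    <server name="DB_CONNECTION" value="sqlite"/>
    <server name="DB_DATABASE" value=":memory:"/>
</php>
```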

Finally, if our tests have passed we’re ready to deploy our branch to Vapor. But before we can do that, we will need to set our Vapor API key in GitLab so that Vapor will allow the deployments from GitLab. Create your key in the Vapor UI and then in your GitLab repo under Settings > CI/CD > Variables, create a new variable called VAPOR_API_TOKEN with your newly created key as the value.
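The deploy jobs also assume your project contains a vapor.yml defining the production and staging environments. A minimal sketch (the id, name and build steps are placeholders for your own project's values):

```yaml
id: 12345
name: my-project
environments:
    production:
        build:
            - 'composer install --no-dev'
            - 'npm ci && npm run production && rm -rf node_modules'
    staging:
        build:
            - 'composer install'
            - 'npm ci && npm run dev && rm -rf node_modules'
```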

Now, whenever you push or merge into your develop or master branches, the pipeline will trigger: building a Docker image, testing your codebase and deploying to either staging or production in Vapor!