Building a continuous integration workflow with GitLab and OpenShift 3

In this post I’ll go over building and testing a Docker image with GitLab CI and then pushing that image to OpenShift 3. It should still be somewhat helpful if you’re using other container platforms like Kubernetes, or other CI systems like Jenkins. I’m using Django for the project, with some front-end assets built in Node.

Our goal is to have one Docker environment used in development, CI, staging, and production. We’ll avoid repeating ourselves with image building.

At a high level my workflow looks like this:

[Workflow diagram]

Local development

All local development happens with Docker Compose. There is plenty of info on that subject elsewhere, so I’ll skip most of it. I will point out that I want to use the same Python-based Docker image for development and, later, for production.
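
As a rough illustration, a minimal docker-compose.yml for this kind of setup might look like the sketch below – the service names (web, db) and the Postgres image are assumptions for the example, not my actual project file:

# Hypothetical docker-compose.yml for local development.
# The same Dockerfile used here gets reused for the CI and production image.
version: '2'
services:
  db:
    image: postgres:9.5
  web:
    build: .
    command: ./manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db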

Continuous Integration

I’m using GitLab and the GitLab CI runner to run tests and build a Docker image. GitLab has some docs on how to build a Docker image in CI. The executor choices are shell and Docker-in-Docker. I found Docker-in-Docker to be slow, complex, and error prone; in theory it would be better, since the environments are more isolated.
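
For reference, registering a runner with the shell executor looks roughly like this – the URL and token are placeholders, and at the time the binary was called gitlab-ci-multi-runner:

# Register a GitLab CI runner that uses the shell executor
sudo gitlab-ci-multi-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/ci" \
  --registration-token "RUNNER_TOKEN" \
  --description "shell runner for docker builds" \
  --executor "shell"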

GitLab CI: building images with the shell executor

Here is my full .gitlab-ci.yml file for reference.

The goal here is to build the image, run unit tests, deploy on success, and always clean up. I had to run the GitLab CI runner as root, otherwise I would get permission errors. 😦
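
As a minimal sketch of what the build stage boils down to (the job name is mine, and $COMPOSE is the docker-compose alias defined in the variables section shown further down):

build:
  stage: build
  script:
    # Build the images defined in docker-compose.yml under a per-build project name
    - $COMPOSE build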

In a non-trivial CI system, the shell executor can get messy too. We need to worry about building too many images and filling up all the disk space, exhausting Docker’s pool of network subnets, and making sure concurrent builds work (if you need that).

Disk space – I suggest using a service that lets you attach a large volume, formatting it with a lot of inodes, and using Docker’s overlay storage driver. I used an AWS EC2 instance with a 120 GB volume mounted at /var/lib/docker. See this blog post for details – pay attention to the part where you define the inode count. I went with 16568256.
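
For example, formatting and mounting such a volume might look like the following sketch – the device name /dev/xvdf is a placeholder for whatever your instance actually exposes:

# Format the attached volume with an explicit inode count,
# then mount it where Docker keeps its data.
mkfs.ext4 -N 16568256 /dev/xvdf
mkdir -p /var/lib/docker
mount /dev/xvdf /var/lib/docker
# Then run the Docker daemon with the overlay storage driver, e.g. --storage-driver=overlay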

Docker clean up – GitLab has a Docker image that can help clean up Docker images and containers for you here. I’d also consider restarting the server at night and running your own clean-up scripts. I also place a CI clean-up stage like this:

stages:
  - build
  - test
  - deploy
  - clean

...
variables:
  PROJECT_NAME: myproject$CI_BUILD_REF
  COMPOSE: docker-compose -p myproject$CI_BUILD_REF

...
clean_docker:
  stage: clean
  when: always
  script:
    - $COMPOSE stop
    - $COMPOSE down
    - $COMPOSE rm -f

I’ve been using Docker since 1.0 and I’m still amazed by how it finds new ways of breaking itself. You may need to add your own hacks to seek and destroy Docker images and containers that would otherwise pile up forever.
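
A crude nightly clean-up script along those lines (a generic sketch, not the exact hacks from my setup) could be:

#!/bin/sh
# Remove stopped containers left over from old CI builds
docker rm $(docker ps -aq --filter status=exited) 2>/dev/null
# Remove dangling (untagged) images
docker rmi $(docker images -q --filter dangling=true) 2>/dev/null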

The $CI_BUILD_REF is there to ensure each Docker image is unique – this allows us to run multiple builds and have some certainty that the image being tested is the one being pushed to Docker Hub.

The test stages are rather Django/Node specific – just put whatever commands need to execute to run your tests here. If they exit with a success code, GitLab CI will know the stage passed.
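
For example, a Django test job might be as simple as this sketch, where web is the docker-compose service name and ./manage.py test is assumed to be the test command:

test_django:
  stage: test
  script:
    # Run the test suite inside the freshly built image;
    # a non-zero exit code fails the job.
    - $COMPOSE run --rm web ./manage.py test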

Pushing to Docker Hub – I’m tagging my tested image, pushing it to Docker Hub, and calling a webhook (sketched after the snippet below) to notify OpenShift, which automatically pulls the image and deploys it to staging.

deploy_staging:
  stage: deploy
  only:
    - qa
  script:
    - echo "Tag and push ${PROJECT_NAME}_web"
    - docker tag ${PROJECT_NAME}_web ${IMAGE_NAME}:qa
    - docker push ${IMAGE_NAME}:qa
    - "./bin/send-deploy-webhook.sh qa $DEPLOY_WEBHOOK_STAGING"

Notice how I’m only running this on the qa branch and that I’m tagging the image as “qa”. I’m using Docker tags so that one image repository can carry the different stages – dev, staging, and production.
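
The ./bin/send-deploy-webhook.sh script isn’t shown in this post; conceptually it just POSTs to the OpenShift webhook URL passed in from the CI variables. A rough sketch, assuming nothing more than curl:

#!/bin/sh
# Sketch of bin/send-deploy-webhook.sh
# Usage: send-deploy-webhook.sh <tag> <webhook-url>
TAG="$1"
WEBHOOK_URL="$2"
echo "Notifying OpenShift that tag $TAG was pushed"
# -k because OpenShift masters often use self-signed certificates
curl -X POST -k "$WEBHOOK_URL"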

OpenShift with the Docker build strategy

OpenShift lets you build using either a source-to-image strategy or a Docker strategy. Source-to-image would mean rebuilding a Docker image – which we already did in CI – so let’s not use that. The Docker strategy was a bit confusing to me, however. I ended up with a very minimal build stage using OpenShift’s Docker build strategy. Here is a snippet from my build config YAML.

  source:
    type: Dockerfile
    dockerfile: "FROM thelab/tsi-cocoon:dev\nRUN ./manage.py collectstatic --noinput"
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: DockerImage
        name: 'docker.io/user/image:dev'
      pullSecret:
        name: dockerhub
      forcePull: true

Notice the source type of Dockerfile with a VERY minimal inline Dockerfile that just pulls the right image and collects static files (the collectstatic part is Django specific – this could really just be FROM image-name).

The strategy is set to type: Docker and includes my Docker image and the “secret” needed to pull the image from my private repo. Note that you must specify the full Docker registry (docker.io/etc.) or it will not work with a private registry. You add the secret using oc secrets new dockerhub .dockercfg=dockercfg, where dockercfg is the file that’s probably sitting at ~/.dockercfg.
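
Spelled out as a command (assuming your Docker Hub credentials live in the default ~/.dockercfg location):

# Create the pull secret OpenShift uses to fetch the private image
oc secrets new dockerhub .dockercfg=$HOME/.dockercfg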

forcePull is set to true so that OpenShift does a docker pull each time it builds.

You’ll need to define deployments, services, etc. in OpenShift too – but that’s outside the scope of this post. I switched a source-to-image build to a Docker-based one without having to touch anything else.

That’s it – the same Docker image you used with Compose locally should now be on OpenShift. I set up a workflow where git commits on specific branches automatically deploy to OpenShift staging environments. Then I manually trigger the production deploy using the same image as staging.
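
I won’t show my exact production trigger, but one way to promote the very same image that ran on staging – a sketch, reusing the IMAGE_NAME repository from the CI job, not necessarily what my trigger does – is to retag and push it:

# Promote the exact image that was tested on staging to the production tag
docker pull ${IMAGE_NAME}:qa
docker tag ${IMAGE_NAME}:qa ${IMAGE_NAME}:production
docker push ${IMAGE_NAME}:production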

By David

I am a supporter of free software and run Burke Software and Consulting LLC. I am always looking for contract work, especially for non-profits and open source projects.

Open Source Contributions: I maintain a number of Django related projects including GlitchTip, Passit, and django-report-builder. You can view my work on gitlab.

Academic papers: Incorporating Gaming in Software Engineering Projects: Case of RMU Monopoly in the Journal of Systemics, Cybernetics and Informatics (2008)
