Continuous Integration with Jenkins, Docker and Compose

Oxford University Press (OUP) recently started the Oxford Global Languages (OGL) initiative, which aims to provide language resources for digitally under-represented languages. In August 2015 OUP launched two African language websites, for Zulu and Northern Sotho. The backend of these websites is based on an API that retrieves RDF data from a triple store and delivers it to the frontend as JSON-LD.
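The API code itself is not shown in this post; as a minimal sketch of the JSON-LD delivery step (the `to_jsonld` helper and the context URL are hypothetical, not part of the actual OGL API), data mapped from triple store results can be wrapped like this:

```python
import json

def to_jsonld(entry, context="https://example.org/ogl/context.jsonld"):
    """Wrap a dict of lexical data (e.g. mapped from RDF query results)
    into a JSON-LD document by attaching an @context."""
    doc = {"@context": context}
    doc.update(entry)
    return doc

# Illustrative example: a Zulu headword as it might come back
# from a query against the triple store.
entry = {"@id": "ogl:zu/umuntu", "headword": "umuntu", "language": "zu"}
print(json.dumps(to_jsonld(entry), indent=2))
```

In a Flask view, the resulting dict would simply be serialized into the HTTP response body with the `application/ld+json` content type.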

The entire micro-service infrastructure for development, staging, and production runs on Docker containers on Amazon EC2 instances. In particular, we use Jenkins to rebuild the Docker image for the API, a Python Flask application, and Docker Compose to orchestrate the containers. A typical CI workflow is as follows:
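The compose file is not included in this post; a minimal sketch of what such an orchestration could look like (service names, paths, image names, and ports are all illustrative) is:

```yaml
# docker-compose.yml (illustrative): the Flask-based API plus the
# triple store it queries, wired together by Docker Compose.
version: "2"
services:
  api:
    build: ./api          # Dockerfile for the Python Flask application
    ports:
      - "8000:8000"
    depends_on:
      - triplestore
  triplestore:
    image: example/triplestore:latest   # placeholder image name
```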

– a developer commits code to the codebase
– Jenkins triggers a job to run unit tests
– if the unit tests are successful, the Docker image of the Python Flask application is rebuilt and the container is restarted via Docker Compose
– if the unit tests or the Docker build fail, the monitor view shows the Jenkins jobs in red and displays the name of the possible culprit who broke the build.
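The rebuild-and-restart step above is not spelled out in the post; as a sketch, the shell part of the Jenkins job could boil down to something like this (the service name, test path, and test runner are assumptions):

```sh
# Jenkins build step (illustrative): run unit tests first; a non-zero
# exit code fails the job and turns the monitor view red.
python -m pytest tests/ || exit 1

# Rebuild the image for the API service and restart only that container.
docker-compose build api
docker-compose up -d --no-deps api
```

The `--no-deps` flag restarts the API container without touching the containers it depends on.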

Here is a live demo of the Continuous Integration workflow; sample code is available on GitHub.

9 thoughts on “Continuous Integration with Jenkins, Docker and Compose”

  1. I think you should mention the Jenkins plugins that you have installed, e.g. the Git plugin and the ShiningPanda plugin, to build the environment to run the tests.

    1. Thanks for your feedback. Here is a list of the most important Jenkins plugins I used in this project:

  2. Hi Sandro,

    Thank you for your post. I am building a CI pipeline using Jenkins and Docker for my product, which has only 5 components. How do you manage the dependencies between components in order to automate the CI pipeline? Or how do you automate the CI pipeline? How do you define the “good version” of a Docker image in order to run the integration tests?

    For example, dev A makes a change in component A, so Jenkins triggers the component A pipeline and a new Docker image of component A is built. At the same time, dev B makes some changes in component B, so Jenkins triggers the component B pipeline and builds a new Docker image for component B. Now, when I want to run the integration tests against the changes in component A, I go to the integration pipeline, but I don't know how to get the “good version” of component B, because a new version of component B has already been built.

    Could you please share your experience or give me some advice?

    Many thanks

    1. Thanks for your comment and sorry for my late reply. Thanks also for providing an example; it’s easier to suggest a solution with a concrete example.

      A typical CI workflow for your example could work like this:

      1. The initial state is a development environment running component A1 and B1.
      2. Developer A pushes a change to a code repository (for example git).
      3. git notifies Jenkins of the code change via a webhook.
      4. Jenkins receives the webhook from git and triggers a pipeline job that does the following:
      4.1 Build the new Docker image for component A (let’s call it A2).
      4.2 If the Docker image is built successfully, it gets pushed to a Docker Registry and/or deployed to the development environment. The development environment now runs component A2 and B1.
      4.3 Integration tests are immediately executed against the current development environment running components A2 and B1.
      5. The cycle restarts from point 2 with developer B pushing changes to the code repository for component B.

      Note that the Jenkins pipeline job in step 4 needs to be set up so that it does not allow concurrent builds (there is an option for this in Jenkins pipeline jobs under General -> Do not allow concurrent builds). This prevents Jenkins from running two pipeline jobs at the same time, so that you can be sure that integration tests are executed first against components A2-B1 and then against A2-B2. In this way you can also identify the developer who broke the build (and ideally notify them directly from Jenkins).
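      With pipeline jobs, the same setting can also be expressed in the Jenkinsfile itself via the `disableConcurrentBuilds()` option. A hedged sketch of such a Jenkinsfile (the stage contents, service names, and test commands are illustrative, not taken from an actual project):

```groovy
// Declarative Jenkinsfile sketch: concurrent builds are disabled so
// integration tests always see one change at a time (A2-B1, then A2-B2).
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('Build image') {
            steps {
                sh 'docker-compose build component-a'   // illustrative service name
            }
        }
        stage('Integration tests') {
            steps {
                sh 'docker-compose up -d && python -m pytest integration/'
            }
        }
    }
}
```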

      As an optional step after 4, you can have a promotion mechanism to deploy the healthy Docker image to a staging environment (either automatically or manually) so that you have a stable staging environment and a development environment that developers can break and debug.
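      The promotion step could be as simple as re-tagging the image that passed the integration tests and pushing it to the registry under a staging tag (the registry, image name, and tags below are placeholders):

```sh
# Promote the tested development image to staging by re-tagging and pushing.
docker tag  myregistry.example.com/api:dev-build-42 myregistry.example.com/api:staging
docker push myregistry.example.com/api:staging
```

The staging environment then only ever pulls the `staging` tag, so it stays stable while the development environment absorbs every commit.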

      More info on Jenkins pipelines (introduced with Jenkins 2 and not available at the time I wrote this blog post) is available here.

      Shameless plug: I recently published a video course on Jenkins. The fourth section (Pipeline as Code) shows how to implement such a workflow using GitHub, Docker, and Jenkins.

      Hope this helps…

