
How To Use Rundeck For Docker Builds & Deployments

Because the Movio development squads generally work on a wide range of projects at the same time, we wanted to give our developers a single interface from which they could manage all of their builds and deployments, without needing to log in to multiple tools with different interfaces. Although we could have made this process fully automatic, triggered by events such as Git commits, we chose not to: issues such as fixed deployment windows on the servers, concurrent test runs, and a range of others could make this impractical, so we prefer to give the user the choice. In this post I'll take a top-down approach to Rundeck, first showing how the user goes through the process of building and deploying, then explaining all the steps and their underlying tools in detail.


Rundeck was chosen as our main interaction tool because of its simple interface, handy user management, and easy integration with tools such as AWS, Git, and Jenkins. The user is given two jobs in Rundeck: Build service and Deploy service. The diagram below shows the workflow and the steps involved.

[Diagram: Movio Rundeck workflow]

Build service

The picture below shows how users trigger the Build service job in Rundeck – they choose the service to build; the Git branch, tag, or commit on which the build will be based; and the Docker tag under which the resulting container will be published to our Docker registry. Rundeck also saves the supplied Docker tag, so it can later offer the user a list of available tags during deployment.

[Screenshot: the Build service job options in Rundeck]

The Build service job then triggers a Jenkins job by sending a request to a dynamically generated URL. Our Jenkins jobs follow a common naming pattern, so the input parameters can easily be passed straight through from the Rundeck job.
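As a hedged sketch (the hostname, job-naming pattern, and parameter names here are assumptions, not our actual setup), the trigger boils down to building a URL from the Rundeck options and POSTing to Jenkins' buildWithParameters endpoint:

```shell
#!/bin/sh
# Rundeck job options (illustrative values).
SERVICE_NAME="microservice1"   # service to build
GIT_REF="master"               # branch, tag, or commit to build from
DOCKER_TAG="1.0.1"             # tag to publish to the Docker registry

# The Jenkins job name is derived from the service name, so the URL can be
# generated dynamically from the Rundeck options.
JENKINS_URL="https://jenkins.example.com/job/build-${SERVICE_NAME}/buildWithParameters"
echo "$JENKINS_URL"

# A real trigger would POST with authentication, e.g.:
# curl -X POST --user "$JENKINS_USER:$JENKINS_TOKEN" "$JENKINS_URL" \
#   --data-urlencode "GIT_REF=${GIT_REF}" \
#   --data-urlencode "DOCKER_TAG=${DOCKER_TAG}"
```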


The triggered Jenkins job first builds the service artifact (zip, jar, etc.), then continues with a downstream job that builds the Docker container. The downstream job places the service artifact on the path where the Dockerfile expects it, then builds the container around it. Once finished, the container is pushed to our Docker registry with the supplied tag, and the user is notified via a Slack message. Some builds can take a long time, so the Slack messages are useful if the user wants to switch to another activity in the meantime.
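As an illustration, a Dockerfile for such a service might look like the following (the base image, paths, and file names are assumptions):

```dockerfile
# The downstream Jenkins job copies the built artifact into the build
# context at this path before running `docker build`.
FROM java:8-jre
COPY microservice1.jar /opt/microservice1/microservice1.jar
EXPOSE 80
CMD ["java", "-jar", "/opt/microservice1/microservice1.jar"]
```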

Deploy service

The second Rundeck job lets the user deploy the Docker container. As shown in the image below, the user chooses which server(s) to deploy to, as well as the service name and tag from two drop-down lists. The tag list for each service is read from a JSON file generated by the build job.

[Screenshot: the Deploy service job options in Rundeck]
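The per-service tags file itself can stay very simple – Rundeck's remote-URL option source accepts a flat JSON array of strings as the list of allowed values – for example (values illustrative):

```json
["1.0.0", "1.0.1", "1.0.2"]
```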

Currently we manage our servers through Puppet, so the deploy job consists of two steps. First, the service tag is added (or updated) in Puppet's hieradata for the chosen server(s). Second, Rundeck triggers a Puppet agent run on the servers so the new configuration gets applied. The user can see the Puppet agent's output directly in the triggered Rundeck deploy job. As soon as the agent run has finished they can start testing or, in the case of a production deployment, announcing their achievements.
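A minimal sketch of the first step (the file name and layout are assumptions; in reality the hieradata lives in a repository consumed by the Puppet server):

```shell
#!/bin/sh
DOCKER_TAG="1.0.2"
HIERA_FILE="appserver1.yaml"

# Hieradata for the chosen server before the deployment.
printf 'microservice1_tag: 1.0.1\nmicroservice2_tag: 2.1.0\n' > "$HIERA_FILE"

# Step 1: add or update the tag of the service being deployed.
sed -i.bak "s/^microservice1_tag:.*/microservice1_tag: ${DOCKER_TAG}/" "$HIERA_FILE"

# Step 2 then triggers the agent run on the server, e.g.:
# ssh appserver1 'sudo puppet agent --test'
```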

Inside Puppet

The second step of the deployment process – the Puppet agent run – is not as simple as it looks. We run our Puppet server with a number of plugins, one of which is r10k. Before every Puppet agent run, Rundeck triggers a sync through r10k, which re-reads all Puppet modules: hieradata (with all services' tags), profiles (different server roles – DB, Web, App – need different configuration) and garethr-docker (for managing Docker), to name a few. After this, the configuration changes are applied on the server.

To give you a brief insight into how the service tag gets passed through the Puppet configuration, here are a few code snippets:

Hieradata

microservice1_tag: 1.0.1
microservice2_tag: 2.1.0

Server profile

We currently follow the standard Puppet roles-and-profiles pattern to classify servers. Profiles define the set of containers that run on a particular server, reading each tag from hieradata and calling the services' manifests either in a loop (if the services have uniform deploy requirements) or individually by name. The profile also holds a map from service names to the ports which get mapped to the containers. A simple service manifest call can be seen below:

$microservice1_tag = hiera('microservice1_tag', '')

services::service { 'microservice1':
  service_name => 'microservice1',
  docker_tag   => $microservice1_tag,
  port         => $ports['microservice1'],  # $ports is defined in the profile
}

Service manifest

The manifest uses the garethr-docker module to pull the correct container tag, create the service's init scripts, and start the service. To undeploy a service we simply remove its tag from hieradata; Puppet will then ensure the service is stopped and disabled:

define services::service (
  $service_name = undef,
  $docker_tag   = undef,
  $port         = undef,
) {

  $image_name = "your.registry.com/${service_name}"

  if ! empty($docker_tag) {

    docker::image { $image_name:
      image_tag => $docker_tag,
    }

    docker::run { $service_name:
      image    => "${image_name}:${docker_tag}",
      hostname => $service_name,
      ports    => ["${port}:80"],
    }

  } else {
    service { "docker-${service_name}":
      ensure => 'stopped',
      enable => false,
    }
  }
}

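For example, undeploying microservice1 amounts to removing its line from the server's hieradata; on the next agent run hiera('microservice1_tag', '') returns the empty default, the empty() check sends the manifest into the else branch, and the service is stopped and disabled:

```yaml
# Hieradata after undeploying microservice1 (illustrative)
microservice2_tag: 2.1.0
```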
Pros, cons, and future trends

The main advantages of this approach are its ease of use for developers, reproducibility, a clear history of deployments with the ability to easily see and revert any changes, and uniform deployment across all environments, with no surprises during production deployments. Puppet also supports 'dry runs' (puppet agent --test --noop), so a user can see exactly what would change during a deployment. Through Rundeck's user management we can also easily allow specific users to deploy to specific servers, and because Puppet manages all configuration, whole servers can be rebuilt in a matter of minutes.

Disadvantages include the manual work needed when adding a new service, such as updating the mapped ports in Puppet, creating new build jobs in Jenkins, or updating Rundeck's list of services. Additionally, you'll need to run another container on every host to route requests to the correct containers and their ports. This is quite easy, because the router can be deployed in the same way as all the other services, but it is worth keeping in mind.

The biggest challenge we've faced around Docker deployments is managing the auxiliary data the services need to run. Newer tools such as AWS ECS and Kubernetes provide more dynamic ways of deploying Docker containers, but they don't remove the need to manage this auxiliary data. Services are usually not as simple as in my example; they require configuration such as usernames, passwords, DB connections, endpoints to communicate with, and JVM settings. What's more, these settings differ across environments.

We seem to be at a crossroads in tackling this challenge at Movio. Some of our deployments use Puppet templates and hiera variables to manage configuration files on the hosts and mount them into the containers, while others pass environment variables to the containers and have confd create the configuration files inside them. Even though templating large configuration files can become a lengthy process (especially during development, when the configuration often changes), the latter approach, with more self-contained containers, seems to be the future; it can take much better advantage of the new dynamic ways of deploying Docker. We'll continue documenting our experiences with Docker, so make sure to follow this blog for further developments in our use of this versatile tool.

Read the rest of our Docker series:

Part One: How Movio squads are streamlining their development process with Docker 
