Part 5 – Create a good pipeline for CI/CD
After preparing our Drone instance, it’s time to build our pipeline flow.
This post isn’t meant to be a tutorial on how to create the perfect pipeline, because everyone has different needs.
There are plenty of resources on the net about how to write a pipeline, so the focus here is to share the experience that led us to a pipeline that works well for us.
We started with the most important thing, the BACKEND.
First steps to prepare Commit/Build/Test/Deploy
A pipeline for a CI/CD flow usually has 5 different steps:
- Commit
- Build
- Test
- Stage
- Deploy
Drone looks for a special .drone.yml
file in the root of your repository for the pipeline definition.
In this file you can create multiple pipelines, and in our case we created a pipeline for every step of the CI/CD flow.
The top-level steps
section of a pipeline defines sets of steps that are executed sequentially. Each step starts a new docker container that includes a clone of your repository, and then runs the contents of your commands section inside it.
After you activate a repository inside Drone, every single commit, tag, or pull request will trigger a new job that executes all the pipelines defined in your .drone.yml file.
You can also choose to run these pipelines in parallel or sequentially, based on different trigger configurations.
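As a minimal sketch (pipeline and step names here are purely illustrative), a .drone.yml with two pipelines, where the second one waits for the first and skips tag events, could look like this:

---
kind: pipeline
name: first

steps:
  - name: hello
    image: alpine
    commands:
      - echo "hello from the first pipeline"

---
kind: pipeline
name: second

# wait for the "first" pipeline to complete before starting
depends_on:
  - first

# run for every event except git tags
trigger:
  event:
    exclude:
      - tag

steps:
  - name: hello-again
    image: alpine
    commands:
      - echo "hello from the second pipeline"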
Pipeline blocks in detail
Our final .drone.yml is a bit long, so I’m going to explain each pipeline block.
Notify Pipeline Start
It may not be strictly necessary, but for us it’s very important to get a notification on a Slack channel when a build starts and when it ends.
---
kind: pipeline
name: notify-pipeline-start
steps:
- name: slack
image: plugins/slack
settings:
webhook:
from_secret: SLACK_WEBHOOK
channel: drone-io
link_names: true
template: >
{{#if build.pull }}
*Build started*: {{ repo.owner }}/{{ repo.name }} - <https://github.com/{{ repo.owner }}/{{ repo.name }}/pull/{{ build.pull }}|Pull Request #{{ build.pull }}>
{{else}}
*Build started: {{ repo.owner }}/{{ repo.name }} - Build #{{ build.number }}* (type: `{{ build.event }}`)
{{/if}}
Commit: <https://github.com/{{ repo.owner }}/{{ repo.name }}/commit/{{ build.commit }}|{{ truncate build.commit 8 }}>
Branch: <https://github.com/{{ repo.owner }}/{{ repo.name }}/commits/{{ build.branch }}|{{ build.branch }}>
Author: {{ build.author }}
<{{ build.link }}|Visit build page ↗>
The functionality comes from an official Drone plugin, and the only mandatory fields you have to set are the webhook URL and the channel where Drone will publish.
We also added a custom message template, so the result looks like this:

Build docker image
This is one of the longest parts of the pipeline. Let me explain why.
The operation itself is very easy: there is an official Drone plugin where you set the repo, tags, and credentials, and it does all the work for you.
But we need some checks to avoid errors, and what we build for production is usually a different image tagged with a git tag, because production releases are tied to GIT releases. So here is what we did:
---
kind: pipeline
name: build-docker-image
steps:
- name: build-docker-image-branch
image: plugins/docker
settings:
repo: ${DRONE_REPO,,}
tags:
- ${DRONE_SOURCE_BRANCH/\//-}
- ${DRONE_SOURCE_BRANCH/\//-}-${DRONE_COMMIT_SHA:0:8}
cache_from:
- ${DRONE_REPO,,}:${DRONE_SOURCE_BRANCH/\//-}
username:
from_secret: DOCKER_USERNAME
password:
from_secret: DOCKER_PASSWORD
when:
event:
exclude:
- tag
This is the first step of the build pipeline; it’s triggered for all branches, but not for git tag events. It builds two different image tags: one with the branch name, and another with the branch name plus the commit SHA.
Some tips:
- ${DRONE_REPO,,} : the double comma puts the repo name in lowercase, because docker doesn’t support uppercase repo names
- ${DRONE_SOURCE_BRANCH/\//-} : here we use a substitution to convert / into -
- ${DRONE_COMMIT_SHA:0:8} : we take only the first 8 chars of the COMMIT SHA
The result is that a git repo Leen15/simple-drone
with a branch feature/drone-ci
is converted to a docker image like this: leen15/simple-drone:feature-drone-ci-a1b2c3fa
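These are standard bash-style string operations (Drone applies them to its built-in variables before running the pipeline), so you can reproduce each one in a bash shell to see the effect:
$ REPO="Leen15/Simple-Drone"; echo "${REPO,,}"
leen15/simple-drone
$ BRANCH="feature/drone-ci"; echo "${BRANCH/\//-}"
feature-drone-ci
$ SHA="a1b2c3fa9d8e7f6abcdef012"; echo "${SHA:0:8}"
a1b2c3fa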
What is the reason for creating two tags? It’s simple: the cache.
Creating two different tags allows us to:
- During the build, use the cache_from option to reuse the docker layers of the previous build as a cache, speeding up the current build
- During the deploy, use the tag that contains the commit SHA, which avoids the k8s issue where an image with the same tag as the previous one is not refreshed, and gives us a clear reference to the exact version that is running, because the commit SHA is part of the image tag
With the previous step we create a build for every branch, but for production we want to be sure that we create a unique production image from the GIT RELEASE TAG.
We also have to be sure that the RELEASE TAG comes from the MASTER branch, so we have 2 different checks that run only on TAG events.
First check: compare the commit of the git tag with the commits of the master branch:
- name: Fetch full git repo
image: docker:git
commands:
- git fetch --all
when:
event:
- tag
- name: check-master-commit
image: uala/drone-rancher-deploy
settings:
enforce_branch_for_tag: master
action: tag_check
when:
event:
- tag
The first step downloads the full repo, so we can check all the commits of all branches; the second step uses our open source plugin drone-rancher-deploy, which has a specific function for this operation: it checks whether the current COMMIT exists in the branch set in the enforce_branch_for_tag
field.
With this check, nobody can create a tag from a branch other than MASTER and deploy it to production.
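For reference, a rough plain-git equivalent of that check could look like the step below; this is only a sketch of the idea, not the plugin’s actual implementation:

- name: check-tag-commit-is-on-master
  image: docker:git
  commands:
    # after the full fetch above, verify the tagged commit is reachable from origin/master
    - git branch -r --contains "$DRONE_COMMIT_SHA" | grep -q "origin/master" || (echo "tag commit is not on master"; exit 1)
  when:
    event:
      - tag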
Second check: look for a docker image built from the master branch with the same commit:
- name: check-master-image
image: ellerbrock/alpine-bash-curl-ssl
commands:
- echo "Running on agent $DRONE_MACHINE"
- URL=https://registry.hub.docker.com/v1/repositories/$REPO/tags/$MASTER_TAG
- CURL_RESPONSE=$(curl --silent -u $DOCKER_USER:$DOCKER_PASS $URL)
- echo -e "\e[34mCheck if master image $REPO:$MASTER_TAG exists..."
- test "$CURL_RESPONSE" == 'Tag not found' && (echo -e "\e[31mMaster image does not exists\e[0m"; exit 1)
- echo -e "\e[32mMaster image OK\e[0m"
- exit 0
environment:
REPO: ${DRONE_REPO,,}
MASTER_TAG: master-${DRONE_COMMIT_SHA:0:8}
DOCKER_USER:
from_secret: DOCKER_USERNAME
DOCKER_PASS:
from_secret: DOCKER_PASSWORD
when:
event:
- tag
This is a bash script that asks Docker Hub whether a master image with the same COMMIT SHA exists, and continues the pipeline only if it does:

Last step of the build: create a docker image for the production tag:
- name: build-docker-image-tag
image: plugins/docker
settings:
repo: ${DRONE_REPO,,}
tags:
- ${DRONE_TAG/\//-}
cache_from:
- ${DRONE_REPO,,}:master-${DRONE_COMMIT_SHA:0:8}
username:
from_secret: DOCKER_USERNAME
password:
from_secret: DOCKER_PASSWORD
when:
event:
- tag
As you can see, it uses the cache from the master image to generate the tag image, so it’s fast.
Test our code
Our backend is based on Ruby on Rails, so for every push we run two different checks, RuboCop and RSpec:
---
kind: pipeline
name: rubocop
trigger:
event:
exclude:
- tag
steps:
- name: rubocop
image: ${DRONE_REPO,,}:${DRONE_SOURCE_BRANCH/\//-}-${DRONE_COMMIT_SHA:0:8}
commands:
- echo "Running on agent $DRONE_MACHINE"
- bundle exec rubocop --parallel
depends_on:
- build-docker-image
image_pull_secrets:
- dockerconfigjson
---
kind: pipeline
name: rspec
trigger:
event:
exclude:
- tag
steps:
- name: rspec
image: ${DRONE_REPO,,}:${DRONE_SOURCE_BRANCH/\//-}-${DRONE_COMMIT_SHA:0:8}
environment:
[...]
commands:
- echo "Running on agent $DRONE_MACHINE"
- cp -f config/database.ci.yml config/database.yml
- bundle exec rake parallel:create parallel:load_schema parallel:prepare --trace
- bundle exec rake parallel:spec[3]
services:
- name: redis
image: redis
- name: postgres
image: postgres
depends_on:
- build-docker-image
image_pull_secrets:
- dockerconfigjson
These are two different pipeline blocks that run in parallel, and they use the image built in the previous pipeline as their base image.
Some notes here:
Both pipelines have a depends_on
option; this is used to run the tests only after the build-docker-image pipeline has completed.
Both pipelines have a trigger
that EXCLUDES execution for tag events; we don’t need the tests there because, by the time we create the release tag, they have already run on the master branch.
There is a special option, image_pull_secrets.
This is very important because in the dockerconfigjson
secret we have to put the docker auth used to pull private images from Docker Hub.
The secret has to be set as:
dockerconfigjson={"auths": {"https://index.docker.io/v1/": {"auth": "XXXXXXXXXX"}}}
Where the auth key can be generated with this command:
$ echo -n 'docker_username:docker_password' | base64
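Putting the two together, one way to produce the whole value for the secret (with placeholder credentials, of course) is:
$ AUTH=$(echo -n 'docker_username:docker_password' | base64)
$ echo "dockerconfigjson={\"auths\": {\"https://index.docker.io/v1/\": {\"auth\": \"$AUTH\"}}}"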
Deploy!
Ok, we have our image, we tested the code inside it, and we are ready to deploy. But how?
If you search on Google, today every CI/CD tool has a plugin that allows you to deploy on Kubernetes.
To do this, the CI/CD tool usually has to live on the same cluster and must have full access to deploy whatever it wants. You can imagine what can happen if a compromised Jenkins plugin has that kind of access…
So, what do we want to do?
- We want to limit who can deploy a specific branch / pipeline
- We want every branch to be deployable to a specific namespace and workload, but with limitations
- We want to be independent from the cluster where the CI/CD tool runs
Fortunately we use Rancher, and this gives us a big advantage: we can control who can deploy what, with no dependency on the cluster, because a Rancher user can deploy to any cluster managed by it (with the right access).
The only thing missing is a plugin that allows us to deploy through Rancher, with good granularity over branches and environments.
At Uala, we have 10 test environments, and during development a branch is usually attached to one of these environments, so everyone (the dev, the testing team and the PM) can check the current state of the development.
We developed a new plugin, a “rancher deployer”, that allows us to deploy to any k8s cluster using Rancher 2.
How does it work?
This plugin uses the Rancher API to access Rancher with specific credentials, selects the Rancher project, and deploys the image we want into the workloads we want.
The first thing to do is to create a user in Rancher:

In the permissions section, you can select “Custom” and leave everything unchecked.
Now select the project where we want this user to be able to deploy, and add it as a member:

Pay attention here: you have to select ONLY the “Manage Workloads” ROLE; this is the minimum role that allows our user to deploy something.
Now log in to Rancher with this new user, and you should see the project enabled.
There will be only workloads: no ingresses, services, volumes, secrets or anything else.
Go to the top right menu and select “API & Keys”.
Here we can create the credentials used to deploy with Drone:


Now you have your access_key and secret_key to use. Keep in mind that with these credentials you can deploy to all the projects this user has been added to (you cannot create a key “per project”).
It’s time to use our deployer, so go to the .drone.yml file and add a new pipeline:
---
kind: pipeline
name: deploy
trigger:
branch:
exclude:
- master
steps:
- name: rancher-deploy
image: uala/drone-rancher-deploy
settings:
config: k8s-envs.yml
# dry_run: true
logging: info
environment:
RANCHER_ACCESS_KEY_ENV:
from_secret: RANCHER_ACCESS_KEY
RANCHER_SECRET_KEY_ENV:
from_secret: RANCHER_SECRET_KEY
RANCHER_ACCESS_KEY_PRODUCTION:
from_secret: RANCHER_ACCESS_KEY_PRODUCTION
RANCHER_SECRET_KEY_PRODUCTION:
from_secret: RANCHER_SECRET_KEY_PRODUCTION
depends_on:
- build-docker-image
As you can see, there is a trigger that EXCLUDES the master branch, because here we only deploy test branches; the production deploy comes from a GIT TAG.
We have to set only two things:
- our rancher credentials (we have 2 credentials, one for test envs, and one for production)
- a k8s-envs.yml (this is the key!)
This plugin deploys a branch to a specific environment, but how does it know what to do? It uses a yml file:
# List of downcased environments: map from environment to list of services
# Reusable configuration
.env-config: &env-config
branch: none
server_url: "https://your.rancher.url"
image: "<%= "#{ENV['DRONE_REPO'].downcase}:#{ENV['DRONE_SOURCE_BRANCH'].to_s.gsub(/\/+/, '-')}-#{ENV['DRONE_COMMIT_SHA'].to_s[0, 8]}" %>"
access_key: <%= ENV['RANCHER_ACCESS_KEY_ENV'] %>
secret_key: <%= ENV['RANCHER_SECRET_KEY_ENV'] %>
services:
- admin
- web
develop:
<<: *env-config
branch: feature/drone-ci
project: ENV-Develop
namespace: backend-env-develop
production:
<<: *env-config
only_tags: true
branch: RELEASE_TAGS_ARE_DEPLOYED_ON_THIS_ENV
project: ENV-Production
namespace: backend-env-production
image: "<%= "#{ENV['DRONE_REPO'].downcase}:ci-#{ENV['DRONE_TAG'].to_s.gsub(/\/+/, '-')}" %>"
access_key: <%= ENV['RANCHER_ACCESS_KEY_PRODUCTION'] %>
secret_key: <%= ENV['RANCHER_SECRET_KEY_PRODUCTION'] %>
services:
- admin
- web
- worker
- scheduler
env1:
<<: *env-config
branch: my-cool-branch # Regepx may be used here
The file is really simple: there is an env-config block that acts as the base configuration. Here you can put all the data shared by all environments; for example we set:
- server_url : the URL of the Rancher instance
- image: the format of the docker image to use
- access_key and secret_key: the credentials for all test environments
- services: the standard workloads that exist in every environment and that we want to upgrade with the new docker image
Then there is a section for every environment. For example, our develop environment uses all the standard configuration, so we only have to set:
- branch: the branch associated with this environment. The plugin updates every environment whose branch matches the current one
- project: the Rancher project of this environment (the credentials used must be able to access it)
- namespace: the namespace where the services (workloads) live
The production environment is very similar, but we set:
- only_tags: true THIS IS VERY IMPORTANT, because it tells the plugin not to check the branch parameter: it will deploy to this environment only when it detects a TAG event
- image: we force the docker image format to the tag-based one
- access_key and secret_key: the credentials for the production environment
- services: a different list of workloads to deploy (in production there are more workloads for traffic balancing)
As a result, when the plugin runs, it looks for all the environments that match the current branch name, logs in with the Rancher credentials, selects the Rancher project and updates every workload in the services list:

Notify Pipeline End
The pipeline is complete; we only need to notify Slack about the result:
---
kind: pipeline
name: notify-pipeline-end
steps:
- name: slack
image: plugins/slack
settings:
webhook:
from_secret: SLACK_WEBHOOK
channel:
from_secret: SLACK_CHANNEL
link_names: true
template: >
{{#if build.pull }}
*{{#success build.status}}✔{{ else }}✘{{/success}} {{ uppercasefirst build.status }}*: {{ repo.owner }}/{{ repo.name }} - <https://github.com/{{ repo.owner }}/{{ repo.name }}/pull/{{ build.pull }}|Pull Request #{{ build.pull }}>
{{else}}
*{{#success build.status}}✔{{ else }}✘{{/success}} {{ uppercasefirst build.status }}: {{ repo.owner }}/{{ repo.name }} - Build #{{ build.number }}* (type: `{{ build.event }}`)
{{/if}}
Commit: <https://github.com/{{ repo.owner }}/{{ repo.name }}/commit/{{ build.commit }}|{{ truncate build.commit 8 }}>
Branch: <https://github.com/{{ repo.owner }}/{{ repo.name }}/commits/{{ build.branch }}|{{ build.branch }}>
Author: {{ build.author }}
Duration: {{ since build.created }}
<{{ build.link }}|Visit build page ↗>
depends_on:
- build-docker-image
- rubocop
- rspec
- deploy
trigger:
status:
- success
- failure
Here we have a custom template, as at the beginning, but the most important things are the last two parts:
- depends_on: we have to wait for the execution of all the previous pipelines, so this pipeline block runs only when everything else is done
- trigger status: by default, a pipeline is triggered only on success, but we also have to log when the pipeline fails, so we have to explicitly declare both trigger statuses
Secrets and Security
The .drone.yml file is complete, our pipeline works and the deploy is OK. What are we missing? Security.
We set the Rancher production credentials in the deploy plugin, so in theory anybody could change the k8s-envs.yml file and deploy a malicious branch to production. How can we avoid that?
The Drone UI has a section where you can set all the secrets to use in the pipeline, but you have no control over how users will use these credentials in their pipelines.
Fortunately, there is a great plugin that allows Drone to read secrets from Kubernetes.
Using this plugin, we gain two advantages:
- we can centralize secrets for different repositories and pipelines, without the need to replicate them for each one
- we can CONTROL the access to these secrets, with various options
To use this plugin, we need to deploy a new workload that allows Drone to read secrets:

You have to set the KUBERNETES_NAMESPACE env to your current namespace and generate a SECRET_KEY that allows the Drone server (and agents) to communicate with this plugin. You can generate it as described in the previous post.
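As a minimal sketch, assuming the secrets extension is published as drone/kubernetes-secrets and listens on port 3000 (double-check both for your version), the workload could be described like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-kubernetes-secrets
  namespace: drone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-kubernetes-secrets
  template:
    metadata:
      labels:
        app: drone-kubernetes-secrets
    spec:
      containers:
        - name: secrets
          image: drone/kubernetes-secrets    # assumed image name of the secrets extension
          ports:
            - containerPort: 3000            # assumed default port of the extension
          env:
            - name: SECRET_KEY               # shared key, same value configured on the Drone server/agents
              value: "generate-me-as-in-the-previous-post"
            - name: KUBERNETES_NAMESPACE     # namespace where the centralized secrets live
              value: drone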
By default this workload cannot access secrets, so you have to create RBAC rules that allow it to read them:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: secret-reader
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-secrets
namespace: drone
subjects:
- kind: ServiceAccount
name: default
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
You can run this code directly from rancher using the “Import YAML” button.
With this code we create a ClusterRole that allows reading secrets, and then we bind this role to the “default” ServiceAccount of the “drone” namespace.
What can we do now?
First of all, we can centralize our secrets:
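For example, the docker-credentials secret used below can be created as a plain Kubernetes Secret in the drone namespace (a sketch with placeholder values; the key names must match the name fields referenced later in the .drone.yml):

apiVersion: v1
kind: Secret
metadata:
  name: docker-credentials
  namespace: drone
type: Opaque
stringData:
  username: docker_username
  password: docker_password
  dockerconfigjson: '{"auths": {"https://index.docker.io/v1/": {"auth": "XXXXXXXXXX"}}}'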

And then we can use them in our .drone.yml by declaring secrets at the end of the file:
---
kind: secret
name: SLACK_WEBHOOK
get:
path: slack-webhook
name: backend_url
---
kind: secret
name: SLACK_CHANNEL
get:
path: slack-webhook
name: backend_channel
---
kind: secret
name: DOCKER_USERNAME
get:
path: docker-credentials
name: username
---
kind: secret
name: DOCKER_PASSWORD
get:
path: docker-credentials
name: password
---
kind: secret
name: dockerconfigjson
get:
path: docker-credentials
name: dockerconfigjson
---
kind: secret
name: RANCHER_ACCESS_KEY
get:
path: rancher-envs-credentials
name: access_key
---
kind: secret
name: RANCHER_SECRET_KEY
get:
path: rancher-envs-credentials
name: secret_key
---
kind: secret
name: RANCHER_ACCESS_KEY_PRODUCTION
get:
path: rancher-prod-credentials
name: access_key
---
kind: secret
name: RANCHER_SECRET_KEY_PRODUCTION
get:
path: rancher-prod-credentials
name: secret_key
We can now LIMIT the access to specific secrets.
For example, we decided that to deploy to production we must have a RELEASE TAG, and nobody should be able to deploy without one. How do we do it?
We add a rule to the secret with a specific X-Drone annotation:
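For example, assuming the plugin supports an X-Drone-Events annotation to restrict a secret to specific events, the production credentials could be declared like this sketch:

apiVersion: v1
kind: Secret
metadata:
  name: rancher-prod-credentials
  namespace: drone
  annotations:
    X-Drone-Events: tag    # assumed annotation name: only expose this secret to tag events
type: Opaque
stringData:
  access_key: token-xxxxx
  secret_key: xxxxxxxxxxxxxxxxxxxx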

Now, when the pipeline tries to access the production credentials, they will exist only if we are in a GIT RELEASE (tag) event.
So:
- we can use the PRODUCTION ENVS only in a tag event
- we added a check in our .drone.yml that the commit of our release tag must exist in the master branch
- we added a check in our .drone.yml that a master image with the same commit as the release tag must exist
If you want, you can also move the k8s-envs.yml into a secret and load it from the pipeline:
---
kind: pipeline
name: deploy
...
steps:
- name: read-k8s-envs
image: busybox
commands:
- echo -e "$K8S_ENVS" > k8s-envs.yml
environment:
K8S_ENVS:
from_secret: K8S_ENVS
- name: rancher-deploy
image: uala/drone-rancher-deploy
settings:
config: k8s-envs.yml
In this way you centralize the configuration of the environments, without any risk that two devs deploy different branches to the same environment, or that somebody accidentally breaks the yaml file.
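The K8S_ENVS secret itself can then be declared like the others at the end of the file; the path and key names here are just illustrative:

---
kind: secret
name: K8S_ENVS
get:
  path: k8s-envs-config   # hypothetical Kubernetes secret holding the file
  name: k8s_envs_yml      # hypothetical key containing the yaml contents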
But wait, the .drone.yml file used by the pipeline is inside the branch, so anybody could remove our checks… right? Yes, but we can avoid this too.
Drone has a great feature that allows applying a SIGNATURE to our .drone.yml file.
This signature is generated by the Drone server and stored inside the .drone.yml, so if somebody tries to change the file, Drone will block the execution of the pipeline.
Let’s see how to do it.
First of all, you have to protect your repository from Drone UI:

Then you have to generate a SIGNATURE for the .drone.yml using the Drone CLI:
drone --server https://your.drone.url -t $DRONE_TOKEN sign YOUR/REPO --save
The --save flag writes the signature directly into the yml.
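If everything goes well, the command appends a signature block to the end of the file, something like this (the hmac value here is just a placeholder):

---
kind: signature
hmac: 0123456789abcdef0123456789abcdef0123456789abcdef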
You can find the DRONE_TOKEN inside the Drone UI:

Multi-stage Builds
A little note about multistage builds.
One of our FRONTEND APPS works with a multistage build, something like this:
FROM node:10.15.3-alpine as builder
RUN mkdir -p /usr/src/app /usr/src/app/deploy
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY ./app/package*.json /usr/src/app/
RUN npm install
COPY ./app /usr/src/app
RUN npm run build
COPY ./docker-entrypoint.sh \
./nginx.conf \
./auth.htpasswd \
./.env.dist /usr/src/app/deploy/
FROM nginx:1.16-alpine
RUN apk add --update bash nano && rm -rf /var/cache/apk/*
ENV PROJECT_PATH /usr/share/nginx
COPY --from=builder /usr/src/app/deploy $PROJECT_PATH/deploy
COPY --from=builder /usr/src/app/build $PROJECT_PATH/html
COPY --from=builder /usr/src/app/deploy/nginx.conf /etc/nginx/conf.d/default.conf
RUN chmod 755 $PROJECT_PATH/deploy/docker-entrypoint.sh
EXPOSE 80
EXPOSE 8080
ENTRYPOINT ["/usr/share/nginx/deploy/docker-entrypoint.sh"]
In this case, it can be quite difficult to use the DOCKER CACHE in the pipelines, but there is a way:
---
kind: pipeline
name: build-docker-image
steps:
- name: build-docker-image-builder
image: plugins/docker
settings:
repo: ${DRONE_REPO,,}
target: builder
tags:
- ${DRONE_SOURCE_BRANCH/\//-}-builder
cache_from:
- ${DRONE_REPO,,}:${DRONE_SOURCE_BRANCH/\//-}-builder
username:
from_secret: DOCKER_USERNAME
password:
from_secret: DOCKER_PASSWORD
when:
event:
exclude:
- tag
- name: build-docker-image-branch
image: plugins/docker
settings:
repo: ${DRONE_REPO,,}
tags:
- ${DRONE_SOURCE_BRANCH/\//-}
- ${DRONE_SOURCE_BRANCH/\//-}-${DRONE_COMMIT_SHA:0:8}
cache_from:
- ${DRONE_REPO,,}:${DRONE_SOURCE_BRANCH/\//-}
- ${DRONE_REPO,,}:${DRONE_SOURCE_BRANCH/\//-}-builder
username:
from_secret: DOCKER_USERNAME
password:
from_secret: DOCKER_PASSWORD
when:
event:
exclude:
- tag
- name: build-docker-image-tag
image: plugins/docker
settings:
repo: ${DRONE_REPO,,}
tags:
- ci-${DRONE_TAG/\//-}
cache_from:
#- ${DRONE_REPO,,}:feature-drone-ci-master-tag-builder
- ${DRONE_REPO,,}:master-builder
#- ${DRONE_REPO,,}:feature-drone-ci-master-tag-${DRONE_COMMIT_SHA:0:8}
- ${DRONE_REPO,,}:master-${DRONE_COMMIT_SHA:0:8}
username:
from_secret: DOCKER_USERNAME
password:
from_secret: DOCKER_PASSWORD
when:
event:
- tag
As you can see, the trick is to generate a partial build with only the “builder” stage, so you can use the docker cache.
The parameter to use with the official Drone plugin is target, set to the name of the stage you want to build.
In this way we create a specific tag for the builder stage, and then we use it as a cache for the next build.
The build time is much better now:

Conclusions
The journey to a good CI/CD strategy has been long, but we want to share our story because somebody might benefit from it in the future by following the same path.
Now we can finally start migrating our 600+ containers to Rancher 2, and that will be another long journey.