Part 4 – Choose a CI/CD tool for Rancher 2 and K8S

In the last post of this series, we talked about how we prepared and deployed the Rancher 2 environment.

After that, we created a new K8S cluster where we added new dedicated hosts for our main infrastructure.

Now, the most important step before we start moving all our services is being able to deploy them with a solid and efficient CI/CD strategy.

Our requirements for a good CI/CD tool

With Rancher 1 we used Jenkins on a dedicated host, with lots of tricks and customisations to use it in the “Docker way” and to run backend tests inside the freshly built Docker image.
We also made a Rancher wrapper that allowed us to deploy to Rancher 1 directly from Jenkins.

For the new infrastructure, we looked for a good CI/CD tool that would allow us to:

  • Deploy directly on Rancher / K8S
  • Scale on multiple hosts
  • Install in our infrastructure (self-hosted), better if it’s free
  • Be light and efficient

Choose the right CI/CD tool for us

The first question: can we still use Jenkins? We found the great Blue Ocean project, which promises to simplify continuous delivery. Yes, it could work, but under the hood Jenkins is still there, with a big Java footprint, outdated/risky plugins and lots of issues that appear daily. A better UI cannot hide that the engine underneath is still the same.

We then tested GitLab: the CI/CD part has grown a lot in recent years and it’s now really a great tool… with a single problem: they’re removing support for using a GitHub repo as a source for the CI/CD. This is a big issue for us, because we don’t want to move 100+ repos to GitLab only to use the CI/CD part.

Rancher 2 has a built-in option for pipelines, but it’s a wrapper around Jenkins: it’s very limited and it deploys a Jenkins instance for every project… definitely not the best solution.

The last tool we tested was drone.io, and it’s really the tool we were looking for: it’s simple, efficient, scalable, and it’s made for Docker, because every single step of the pipeline runs in a Docker container.

Install Drone

We are new to the Kubernetes world, and the mantra we found everywhere is “there is a Helm chart for everything”. Most of the time it’s true, but it doesn’t always work as expected.

For Drone, the latest version is very simple to deploy, so we proceeded manually.

First of all, we have to create the server part in a new workload.

We also have to define two volumes:

The first is the “data volume”: Drone creates a SQLite database and persists it to a container volume at /data. To prevent data loss, you can mount the data volume to the host machine or to any persistent storage.
Keep in mind that if you choose to mount to the host machine, you always have to run the server on the same node, defined in the “Node Scheduling” section.

The second one is the Docker socket, because Drone requires access to your host machine’s Docker socket. This is used to launch pipelines in Docker containers on the host machine.
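We configured this through the Rancher UI, but as a rough sketch, the equivalent Kubernetes manifest for the server workload could look like the following. Names (drone-server, the drone namespace) and the host path /var/lib/drone are assumptions for illustration; any persistent storage works for the data volume:

```yaml
# Hypothetical K8S equivalent of the Rancher workload described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-server
  namespace: drone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-server
  template:
    metadata:
      labels:
        app: drone-server
    spec:
      containers:
        - name: drone-server
          image: drone/drone:1
          ports:
            - containerPort: 80
          volumeMounts:
            - name: drone-data          # SQLite database lives under /data
              mountPath: /data
            - name: docker-socket       # host Docker socket, used to run pipelines
              mountPath: /var/run/docker.sock
      volumes:
        - name: drone-data
          hostPath:
            path: /var/lib/drone        # assumption: pin the pod to one node if you use hostPath
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
```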

We also need some ENVs to make it run; these are the ones we used.

GITHUB:

DRONE_GITHUB_CLIENT_ID = 
DRONE_GITHUB_CLIENT_SECRET =
DRONE_GITHUB_SERVER = https://github.com

We use GitHub, so we have to add credentials to access repositories.
Follow this guide to learn how to generate these credentials.

AGENTS:

 DRONE_AGENTS_ENABLED = true
 DRONE_RPC_SECRET =

If you want to use agents, you can set the first ENV to true.
You also have to create a SECRET KEY to allow agents to communicate with the server.
You can generate a random one with this command in any Unix console:

$ openssl rand -hex 16

LOGS:

 DRONE_LOGS_COLOR = true
 DRONE_LOGS_DEBUG = true
 DRONE_LOGS_PRETTY = true

These ENVs are only for debugging; you can skip them if everything works fine 🙂

KUBERNETES:

 DRONE_KUBERNETES_ENABLED = true
 DRONE_KUBERNETES_NAMESPACE = drone

We have to tell Drone that we are running in K8S and what the current namespace is.

SERVER URL:

 DRONE_SERVER_HOST = your.drone.url
 DRONE_SERVER_PROTO = https

The domain where your Drone instance is running.

USERS AND ACCESS:

 DRONE_USER_CREATE = username:YOUR_GITHUB_USERNAME,admin:true
 DRONE_USER_FILTER = YOUR_GITHUB_ORG

By default a Drone instance is open to anyone, so we have to restrict it to prevent strangers from logging in and running their own pipelines. These two ENVs set a specific user as ADMIN of the Drone instance and allow access only to users of a specific GitHub organization.

KUBERNETES SECRETS:

 DRONE_SECRET_ENDPOINT = http://drone-secrets:3000
 DRONE_SECRET_SECRET =

You can set these ENVs if you’ll use K8S secrets in your pipelines. I’ll explain this better in the next post.

AGENTS!

Our server is running, but we configured it to use agents to execute our pipelines, so let’s create them!

Here, too, we have to mount the Docker socket and set some ENVs:

SERVER URL:

 DRONE_RPC_SERVER = https://your.drone.url
 DRONE_RPC_SECRET =

RUNNER CAPACITY:

 DRONE_RUNNER_CAPACITY = 3

An integer defining the maximum number of pipelines the agent should execute concurrently. 

KUBERNETES:

 DRONE_KUBERNETES_ENABLED = true
 DRONE_KUBERNETES_NAMESPACE = drone

KUBERNETES SECRETS:

 DRONE_SECRET_ENDPOINT = http://drone-secrets:3000
 DRONE_SECRET_SECRET =
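Putting the agent pieces together, a sketch of the agent workload as a Kubernetes manifest could look like this. The drone/agent:1 image and the drone-agent name are assumptions; the RPC secret must match the one you set on the server:

```yaml
# Hypothetical K8S Deployment for the Drone agents described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-agent
  namespace: drone
spec:
  replicas: 2                           # scale agents by raising replicas
  selector:
    matchLabels:
      app: drone-agent
  template:
    metadata:
      labels:
        app: drone-agent
    spec:
      containers:
        - name: drone-agent
          image: drone/agent:1
          env:
            - name: DRONE_RPC_SERVER
              value: https://your.drone.url
            - name: DRONE_RPC_SECRET
              value: ""                 # same secret generated for the server
            - name: DRONE_RUNNER_CAPACITY
              value: "3"                # max concurrent pipelines per agent
          volumeMounts:
            - name: docker-socket       # pipelines run as Docker containers on the host
              mountPath: /var/run/docker.sock
      volumes:
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
```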

Activate a repository in Drone

Now you can open your Drone URL and ACTIVATE a repository to trigger push/tag events and execute a pipeline.
In the settings there are only a few options to set (but they are enough).

The most important parts are Project settings and Project visibility.
The first section allows you to “protect” a pipeline: when enabled, it blocks the pipeline if the YAML signature cannot be verified. This prevents anybody from changing your pipeline without authorization.
The trusted flag enables privileged capabilities: the ability to start privileged containers and to mount host machine volumes (you should not use it).
Under Project visibility you can select what kind of visibility the project should have. Public visibility means that anybody with the URL of your Drone instance can see your pipeline executions, with no login required.

Conclusion

Now everything is ready; only one thing is missing… a .drone.yml file inside your repo.
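Just to verify the whole setup works end to end, a minimal hypothetical .drone.yml could be as simple as this (the step name and image are placeholders):

```yaml
# Minimal smoke-test pipeline: one step that prints a message.
kind: pipeline
name: default

steps:
  - name: hello
    image: alpine:3.9
    commands:
      - echo "Hello from Drone"
```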
Are you interested in how we created our pipelines? Let’s see in the next part!