Part 3 – A new infrastructure for Rancher 2 and K8S
In a previous post we talked about our current situation with Rancher 1, and what brought us to upgrade to Rancher 2.
Now it’s time for a new infrastructure.
Choose servers
First of all, we have to decide where Rancher 2 should live, and there are two options:
- An AWS EC2 instance, with a single Rancher 2 installation
- A Kubernetes cluster with a Helm installation of Rancher 2 HA
Cost is always one of the most relevant factors, together with stability and good performance.
An EC2 instance could be a good choice for stability: of course its uptime should be better than a dedicated host’s.
But we would need an instance with decent performance, for example 2 vCPUs and 8 GB of RAM, in a location near us, like Frankfurt.
Monthly cost + storage and network: ~100€/month.
Well… not so cheap for 2 vCPUs.
Let’s try the other way.
I’m looking for 3 dedicated hosts, in different datacenters, with different networks.
Checking with my favorite providers, this is what I found:
- Hetzner: i7-6700, 64GB RAM, 2x512GB NVMe SSD -> 39€
- SoYouStart (OVH): Xeon E3-1245v2, 32GB RAM, 2x480GB SSD -> 39,99€
- Online.net (Scaleway): Xeon D-1531, 32GB RAM, 2x500GB SSD -> 39,99€
Total: 119€/month.
So… by adding around 20€ we can create a Kubernetes cluster with 3 high-performance servers in different locations, with a High Availability Rancher installation.
We can also think about using the same cluster for other workloads, for example CI/CD stuff.
Not so bad right?
Create the Kubernetes Cluster
Now we have our 3 new dedicated hosts, and we have to install a Kubernetes cluster on them.
Where to start? There are many options for installing a k8s cluster, but Rancher suggests using their own tool, RKE.
The process is very simple:
- download the RKE binary
- prepare the nodes (see the sketch after this list)
- Create a Cluster Configuration File
- Deploy Kubernetes with RKE
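“Prepare the nodes” mostly means having a supported Docker version on each host and an SSH user allowed to use it, since RKE runs everything in containers over SSH. A minimal sketch, assuming Ubuntu hosts and the ubuntu user used in the config below (adapt the Docker version to what your RKE release supports):

# on each of the 3 nodes: install Docker and let the SSH user run it
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker ubuntu

# on the machine that will run RKE: make sure key-based SSH works towards every node
ssh-copy-id ubuntu@1.2.3.4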
Of course, the most interesting part of this process is the Configuration File (cluster.yaml).
The minimal conf is something like this:
nodes:
  - address: 1.2.3.4
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker
At the beginning we started with just this and our 3 nodes, but after some testing we found that something useful was missing: the services part:
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  kubelet:
    extra_args:
      max-pods: 500
      node-status-update-frequency: 4s
  kube-controller:
    extra_args:
      node-monitor-period: 2s
      node-monitor-grace-period: 16s
      pod-eviction-timeout: 30s
Some incomprehensible options… but with very important roles:
The etcd part is about “snapshots”, a feature that creates a backup of our cluster every 6 hours and saves it on every node (remember to move the snapshots periodically to a safe location).
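As a rough sketch of that “safe location” step, assuming RKE’s default snapshot directory /opt/rke/etcd-snapshots and a purely hypothetical backup host, a cron entry on each node could rsync the snapshots offsite:

# /etc/cron.d/etcd-snapshot-offload: copy local RKE snapshots to an external backup host every 6 hours
0 */6 * * * root rsync -az /opt/rke/etcd-snapshots/ backup@backup.example.com:/backups/etcd-snapshots/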
Kubelet is the “node agent” of Kubernetes, and here we set 2 different options:
– max-pods: it may be hard to believe, but by default k8s limits every node to 110 pods: this can be an important limit if you have lots of microservices.
– node-status-update-frequency: how often the kubelet reports the node status to the k8s cluster; 10s by default.
Kube-controller is a daemon that controls the cluster. Here we set multiple options for one main reason: failover optimization.
Why?
After some testing, we found that if a node goes down, Kubernetes sets the state of every pod on that node to “unknown”, and it waits 5 MINUTES before dropping them and creating new pods on other nodes. 5 MINUTES.
With the configuration above, the time from when a node goes down to when its pods are rescheduled on other nodes is about 45 seconds. Much better. More info about this here.
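If you want to see this behaviour yourself, a quick and rather brutal test (assuming kubectl is already pointed at the cluster and you only have throwaway workloads on it) is to watch the pods while a node disappears:

# terminal 1: watch where the pods live and how their status changes
kubectl get pods -o wide -w

# terminal 2: simulate a node failure (on an RKE node, stopping Docker also stops the kubelet)
ssh ubuntu@1.2.3.4 'sudo systemctl stop docker'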
So we have our config file, but how do we create our cluster? With a simple command in the same directory:
rke up
RKE will do all the magic, and in a couple of minutes our Kubernetes cluster is ready!
RKE will also create a kube_config file for connecting to our cluster:
kubectl --kubeconfig kube_config_cluster.yml get nodes
NAME      STATUS   ROLES                      AGE   VERSION
Node 1    Ready    controlplane,etcd,worker   11d   v1.11.6
Node 2    Ready    controlplane,etcd,worker   11d   v1.11.6
Node 3    Ready    controlplane,etcd,worker   11d   v1.11.6
It’s black magic.
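To avoid passing --kubeconfig on every command, one option (assuming a standard kubectl setup on the machine where you ran RKE) is to make it the default config:

mkdir -p ~/.kube
cp kube_config_cluster.yml ~/.kube/config
kubectl get nodes   # now works without the --kubeconfig flag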
Install Rancher
Now it’s time to install Rancher, but before doing it we need to install Helm.
Helm is a very useful package management tool for Kubernetes: you can select an app from a catalog and install it with a simple command.
This is exactly what we need to install Rancher and keep it updated.
I will not explain the whole process of installing Helm because Rancher already covers it in its documentation.
What we have to do, once Helm is ready, is add the Rancher chart repository.
I preferred to use the Stable repository, which you can add with this command:
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
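To double-check that the repository was added correctly (this assumes Helm 2, which is what the --name flag in the install command further below implies), you can refresh the index and search for the chart:

helm repo update                     # refresh the local chart index
helm search rancher-stable/rancher   # should list the available Rancher chart versions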
Now for a bit of pain: Rancher server is designed to be secure by default and requires SSL/TLS configuration.
There are three options for this:
- Use Rancher generated certificates
- Use Let’s Encrypt
- Use your own domain certificates
For the first two options you have to install “cert-manager”; here you can find more info about how to proceed.
We chose the last option because we already have a wildcard cert for our domain.
To use our certs, we have to put them in a “secret”; the command is very simple:
kubectl -n cattle-system create secret tls tls-rancher-ingress \
--cert=tls.crt \
--key=tls.key
Keep in mind that tls.crt in this case must include both the domain cert file AND the ca-bundle file, in exactly this order (if you invert them, it will not work!).
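As an example of how that combined file could be built (the input file names here are hypothetical, use your own):

# domain cert first, CA bundle second: the order matters
cat my-domain.crt ca-bundle.crt > tls.crt
cp my-domain.key tls.key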
Ok, we are ready to install Rancher with the last command.
Set hostname to the domain you want for Rancher, and ingress.tls.source to secret to tell it to use the certs from our secret:
helm install rancher-stable/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
--set ingress.tls.source=secret
If everything went well, in a few minutes Rancher should be available at the domain specified above.
In the first minutes the Rancher UI will be extremely SLOW: don’t be scared, wait a couple more minutes and everything will be fine.
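While you wait, you can follow the deployment from the command line (assuming the chart creates a deployment called rancher in the cattle-system namespace, as the command above suggests):

kubectl -n cattle-system rollout status deploy/rancher   # blocks until all replicas are ready
kubectl -n cattle-system get pods                        # the rancher pods should end up Running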

Rancher is ready, and now what? We will see in the next part.