Introduction
A couple of years ago, we started developing new hardware products for our customers.
It’s a big step for a company that usually works with “software” only: you have to define the scope of your new “thing”, the design, how it should work, and how to maintain it.
The last two points are very important, because we don’t create mainstream products: we create products that OUR customers use with OUR services. If the product doesn’t work, the customer will not use our services.
A big point is that we have to create things (did somebody say “Internet of Things”?) that we can monitor and update with new features at any moment.
Every type of board is custom, based on an ARM architecture with 1GB of RAM.
We chose to connect them through OpenVPN, so we can control and update them over a secure connection that can traverse common routers and NAT, with a specific configuration so that a node can be reached only from the main server.
We decided to install and run our code inside Docker, with Swarm as the orchestration tool, for a few reasons:
- It had full support for ARM
- It had a small footprint
- It was very easy to install
We also tried K8S, but we had lots of networking problems inside the VPN, it didn’t have full support for ARM, and the footprint was too big for 1GB of RAM (somebody is still working on it).
Now, after two years, our IoT products are distributed all around Europe, Swarm still doesn’t have a good UI to manage it (Portainer is the only one, but it doesn’t work very well), and the world is moving to K8S.
So, is it the time to move to K8S? How?
K3S, the lightweight Kubernetes distribution
Luckily, Rancher has released K3S!
What did they do? They created a single binary that is:
- Fully compatible with ARM
- Very simple to use
- It includes everything you need to run K8S in 40MB (and you don’t even need Docker to use it)
- It runs on hardware with only 512MB of RAM
We tried it and it’s really fantastic.
The logic of k3s is really simple: there is a server process that runs the K8S control plane, and every node runs only a lightweight agent that wraps all the communication with the server.
Also, there is no incompatibility between Swarm and K8S / K3S, so they can run side by side.
So first of all we installed the server with this command:
curl -sfL https://get.k3s.io | sh -s - --node-ip=X.X.X.X --node-name=MASTER-NODE --bind-address X.X.X.X --tls-san=X.X.X.X
K3S will start a “server” instance using containerd (you don’t need Docker anymore) and will configure the master node with our specific VPN address.
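The server also writes a kubeconfig file that a standard kubectl on another machine can use to reach the cluster (you may need to replace the server address inside it with the VPN IP of the master node):
cat /etc/rancher/k3s/k3s.yaml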
To check that everything is working, we can use a few useful commands:
Server logs:
journalctl -u k3s
List of nodes:
k3s kubectl get nodes
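List of all pods, including the system components that k3s bundles by default (such as coredns and traefik):
k3s kubectl get pods --all-namespaces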
And now we can start adding worker nodes (“Agents”) with this command, passing the URL / IP of the server:
curl -sfL https://get.k3s.io | K3S_URL=https://X.X.X.X:6443 K3S_TOKEN=XXXX sh -
The secret token must be read from the server with this command:
cat /var/lib/rancher/k3s/server/node-token
If we want to run the agent with Docker instead of containerd, we can add this parameter to the initial command:
INSTALL_K3S_EXEC="--docker"
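So the full agent command becomes, for example:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--docker" K3S_URL=https://X.X.X.X:6443 K3S_TOKEN=XXXX sh -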
And then, to check the agent logs, we can run:
journalctl -u k3s-agent
As you can see, the agent footprint is very small (this is a t2.nano on EC2):
[Screenshot: resource usage of the k3s agent on a t2.nano EC2 instance]
Conclusion
With a single command and a bash script, we added all the boards connected to the VPN to our new K3S cluster.
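Here is a minimal sketch of what that script can look like (the boards.txt file with one VPN address per line and root SSH access to each board are assumptions; adapt it to your own setup):
#!/bin/bash
# Sketch: join every board reachable over the VPN to the K3S cluster.
# Run this on the master node; boards.txt (hypothetical) lists one VPN IP per line.
SERVER_URL=https://X.X.X.X:6443
TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)
while read -r NODE; do
  ssh root@"$NODE" "curl -sfL https://get.k3s.io | K3S_URL=$SERVER_URL K3S_TOKEN=$TOKEN sh -"
done < boards.txt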
Of course, K3S is fully compatible with Rancher, so we added this cluster to our Rancher HA setup to orchestrate it, as we do every day with our servers:
[Screenshot: the K3S cluster managed from our Rancher HA installation]
Finally, we converted every stack from the Swarm cluster into workloads in K3S and disabled the Swarm nodes.
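As a trivial illustration of what one of those conversions looks like (hypothetical workload name and image; our real stacks are obviously more complex), a Swarm service running three replicas of nginx becomes a K3S Deployment:
k3s kubectl create deployment web --image=nginx
k3s kubectl scale deployment web --replicas=3
k3s kubectl expose deployment web --port=80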