Ludovic Alarcon

Kubernetes and Cloud enthusiast - Dotnet Developer

Kubeadm + containerd

Set up a local Kubernetes cluster with kubeadm

In this post, we will see how to set up a local Kubernetes 1.23 cluster bootstrapped with kubeadm.
The setup will be the following:

Create 3 virtual machines: one control plane node and two worker nodes.

The Why

First of all, why set up a local cluster with kubeadm rather than with tools like Minikube, Kind, K3s, MicroK8s, etc.?
I had three reasons for wanting to do this for learning purposes:

Step 1: Prepare all nodes

Every command in this step should be executed on all nodes.

First, we will update the package index and install some packages that we will need later.

sudo apt-get update -y
sudo apt-get install -y ca-certificates curl apt-transport-https libseccomp2

Swap must be disabled for the kubelet to work properly.
Since Kubernetes 1.8, the kubelet's fail-swap-on flag defaults to true, which means running with swap enabled is not supported by default.
For more information on the why, there is a great article.

# disable swap
sudo sed -i "/ swap / s/^/#/" /etc/fstab
sudo swapoff -a
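The sed expression above comments out every /etc/fstab line containing " swap " so the change survives reboots. A quick way to sanity-check the expression on a scratch copy (the sample fstab entries below are made up for illustration):

```shell
# build a scratch fstab with one root entry and one swap entry
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/swapfile none swap sw 0 0' > /tmp/fstab.sample

# same expression as above: prefix '#' to every line containing " swap "
sed -i "/ swap / s/^/#/" /tmp/fstab.sample

# the swap line is now commented out; the root entry is untouched
cat /tmp/fstab.sample
```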

We also need to load the required kernel modules and let iptables see bridged traffic.

# Load the kernel modules needed by containerd
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Configure iptables to see bridged traffic
cat <<EOF | sudo tee /etc/sysctl.d/k8s-cri-containerd.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl without reboot
sudo sysctl --system

Now, let’s set up containerd

# Get containerd
wget -q https://github.com/containerd/containerd/releases/download/v1.5.8/cri-containerd-cni-1.5.8-linux-amd64.tar.gz
wget -q https://github.com/containerd/containerd/releases/download/v1.5.8/cri-containerd-cni-1.5.8-linux-amd64.tar.gz.sha256sum
sha256sum --check cri-containerd-cni-1.5.8-linux-amd64.tar.gz.sha256sum

# Install containerd
sudo tar --no-overwrite-dir -C / -xzf cri-containerd-cni-1.5.8-linux-amd64.tar.gz

# Systemd drop-in that points the kubelet at containerd
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service.d/0-containerd.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

sudo systemctl daemon-reload
sudo systemctl start containerd
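One thing worth checking at this point is the cgroup driver. On systemd-based hosts, the Kubernetes docs recommend running containerd with the systemd cgroup driver. The fragment below is a sketch of the relevant section of /etc/containerd/config.toml (not the author's exact config; you can regenerate the default file with containerd config default):

```toml
# /etc/containerd/config.toml (excerpt)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```

Restart containerd (sudo systemctl restart containerd) after editing the file.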

Finally, let’s install kubeadm, kubelet and kubectl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet=1.23.1-00 \
                        kubeadm=1.23.1-00 \
                        kubectl=1.23.1-00
sudo apt-mark hold kubelet kubeadm kubectl

Step 2: Setup the control plane node

We will run the kubeadm init command on the control plane node, passing Calico's default pod network CIDR (192.168.0.0/16), since Calico is the CNI we will install later on.

Note: replace <control-plane-ip> below with the IP address assigned to your control plane node.

# Init kubeadm
sudo kubeadm init --apiserver-advertise-address=<control-plane-ip> --apiserver-cert-extra-sans=<control-plane-ip> --pod-network-cidr=192.168.0.0/16 --node-name=$(hostname -s)
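Equivalently, the init flags can be captured in a kubeadm config file, which is easier to keep in version control. A minimal sketch for this setup (<control-plane-ip> is a placeholder, and the file name kubeadm-config.yaml is arbitrary):

```yaml
# kubeadm-config.yaml -- apply with: sudo kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: <control-plane-ip>
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.1
networking:
  podSubnet: 192.168.0.0/16
```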

We need to retrieve the kubeconfig file. admin.conf is only readable by root, so copy it with sudo and take ownership.

mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config

You can also copy the file to your local machine to be able to access the cluster from it.

Now let’s install Calico

# Install Calico
curl -O https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml

And also the metrics server

# Install metrics server
curl -OL https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
sed -i "/--metric-resolution/a\        - --kubelet-insecure-tls" components.yaml
kubectl apply -f components.yaml
rm components.yaml
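The sed line above injects an extra argument into the metrics-server container spec. On kubeadm clusters, kubelet serving certificates are self-signed by default, so metrics-server cannot verify them; --kubelet-insecure-tls skips that verification (fine for a lab, not for production). After the edit, the container args in components.yaml look roughly like this (an illustrative excerpt, not the full manifest):

```yaml
# components.yaml (excerpt, after the sed edit)
containers:
  - name: metrics-server
    args:
      - --cert-dir=/tmp
      - --secure-port=4443
      - --metric-resolution=15s
      - --kubelet-insecure-tls
```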

Finally, we need to retrieve the kubeadm join command so we can join worker nodes to the cluster. Keep the output, we will use it right after.

kubeadm token create --print-join-command

Step 3: Setup worker nodes

Every command in this step should be executed on all worker nodes.

First, copy the kubeconfig file from the control plane node (~/.kube/config) to your local machine and to the worker nodes (put it in ~/.kube/config there as well).

We just need to run the kubeadm join command we retrieved earlier in order to add the worker nodes to the cluster. Note that the command takes the API server endpoint (the control plane IP on port 6443) as its first argument.

# The join command is the output of the kubeadm token command previously executed
sudo kubeadm join <control-plane-ip>:6443 --token ggzo41.07ycxprqxi04msXY --discovery-token-ca-cert-hash sha256:b2b4fe7994327c32edb226ec843396391fb3674ab8970f558873fc34b2ed669b

We will also add a worker role label on each worker node.

kubectl label node $(hostname -s) node-role.kubernetes.io/worker=worker

We have now set up a Kubernetes 1.23 cluster with kubeadm, using containerd and Calico.

kubectl get nodes

NAME            STATUS   ROLES                  AGE   VERSION
master-node     Ready    control-plane,master   10m   v1.23.5
worker-node01   Ready    worker                  2m   v1.23.5
worker-node02   Ready    worker                  1m   v1.23.5

As a last step, we will restart CoreDNS and the metrics server.

kubectl rollout restart deploy/coredns -n kube-system
kubectl rollout restart deploy metrics-server -n kube-system

Step 4: Test our cluster with Nginx

If you didn’t do it before, copy the kubeconfig file from the control plane node (~/.kube/config) to your local machine. Alternatively, you can execute the commands directly from a node.

Run Nginx and expose it as a NodePort service to test our cluster.

kubectl run nginx --image=nginx
pod/nginx created

kubectl expose pod nginx --type=NodePort --port 80
service/nginx exposed
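For reference, the same pod and service can be declared as a manifest instead of imperative commands. A sketch (kubectl run labels the pod run=nginx, which the service selector below relies on):

```yaml
# nginx.yaml -- apply with: kubectl apply -f nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  containers:
    - name: nginx
      image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
    - port: 80
```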

kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          28s

kubectl get svc
nginx   NodePort   <none>        80:31062/TCP  42s

We can now access the service using any node's IP on port 31062. (Make sure your firewall rules allow traffic to this NodePort.)


<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Step 5: Automation

To be able to create and destroy clusters on demand, I automated the setup with Vagrant, VirtualBox, and a bash script.
Everything is on my GitHub.

cd kubeadm-vagrant-cluster
# To create a cluster
vagrant up
# To destroy a cluster
vagrant destroy