Install a multi-node Kubernetes cluster using `kind` (kind.sigs.k8s.io)

In an earlier blog post we discussed how to deploy a Kubernetes cluster using the `kind` tool in less than 2 minutes! If you missed it, please check that article before proceeding here, to understand what `kind` is all about and its basic usage.

To summarize:

kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

In this article we will see how we can deploy a multi-node cluster (3 masters + 3 workers), again in less than 2 minutes :). At times we have to play with features like taints, tolerations, and node affinities, and to really dive deep into these features, or just to experiment with such configurations, you need more than one node available.

The prerequisites for a multi-node cluster are really minimal; pretty much all you need is the `YAML` file below.

For example:

[terminal]
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
[/terminal]

Even though the entries in the above YAML file are self-explanatory, let me touch upon them. The `role` field decides whether a node becomes a master (control-plane) or a worker; here we ask for 3 master and 3 worker nodes.
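As a side note, each node entry in the same `v1alpha4` config can also pin the node image explicitly via the `image` field, which is handy when you want every node on a specific Kubernetes version (the tag below is simply the version used in this article):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.18.2   # pin the Kubernetes version for this node
- role: worker
  image: kindest/node:v1.18.2
```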

Without much delay, let's create the multi-node cluster. I have saved the above YAML configuration in `multi.yaml`, which is the configuration file path we pass with the `--config` option below.
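As an aside, if you find yourself changing node counts often, the same `multi.yaml` can be generated with a small POSIX shell loop. This is just a convenience sketch; the `MASTERS`/`WORKERS` variables are my own, not part of `kind` itself:

```shell
#!/bin/sh
# Generate a kind cluster config with MASTERS control-plane
# nodes and WORKERS worker nodes, written to multi.yaml.
MASTERS=3
WORKERS=3

{
  echo "kind: Cluster"
  echo "apiVersion: kind.x-k8s.io/v1alpha4"
  echo "nodes:"
  i=0; while [ "$i" -lt "$MASTERS" ]; do echo "- role: control-plane"; i=$((i+1)); done
  i=0; while [ "$i" -lt "$WORKERS" ]; do echo "- role: worker"; i=$((i+1)); done
} > multi.yaml

grep -c "role:" multi.yaml   # prints 6 (3 masters + 3 workers)
```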

[terminal]
[humble@localhost ~]$ sudo kind create cluster --name=multi --config=multi.yaml
Creating cluster "multi" ...
✓ Ensuring node image (kindest/node:v1.18.2) 🖼
✓ Preparing nodes 📦 📦 📦 📦 📦 📦
✓ Configuring the external load balancer ⚖️
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining more control-plane nodes 🎮
✓ Joining worker nodes 🚜
Set kubectl context to "kind-multi"
You can now use your cluster with:

kubectl cluster-info --context kind-multi

Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
[humble@localhost ~]$

[/terminal]

Wow! Is that really all the effort it takes to deploy a multi-node Kubernetes cluster? Yes! Amazing.

Just to confirm quickly that the nodes are available and running:

[terminal]
[humble@localhost ~]$ sudo kubectl config use-context kind-multi
Switched to context "kind-multi".
[humble@localhost ~]$ sudo kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
multi-control-plane    Ready    master   18m   v1.18.2
multi-control-plane2   Ready    master   17m   v1.18.2
multi-control-plane3   Ready    master   15m   v1.18.2
multi-worker           Ready    <none>   14m   v1.18.2
multi-worker2          Ready    <none>   14m   v1.18.2
multi-worker3          Ready    <none>   14m   v1.18.2
[humble@localhost ~]$
[/terminal]

It is not just nodes; other resources are also available in this cluster, for example a storage class.

[terminal]
[humble@localhost ~]$ sudo kubectl describe sc
Name:                  standard
IsDefaultClass:        Yes
Annotations:           kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"standard"},"provisioner":"rancher.io/local-path","reclaimPolicy":"Delete","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           rancher.io/local-path
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
[humble@localhost ~]$
[/terminal]
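Because the `standard` class uses `WaitForFirstConsumer` volume binding, a claim stays `Pending` until a pod actually consumes it. A minimal claim to try this out could look like the following sketch (the name `test-pvc` is made up for illustration):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc          # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # storageClassName can be omitted: "standard" is the default class
```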

Let's try to get some more details about the nodes:

[terminal]
[humble@localhost ~]$ sudo kubectl get nodes -o wide
NAME                   STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION          CONTAINER-RUNTIME
multi-control-plane    Ready    master   26m   v1.18.2   172.18.0.8    <none>        Ubuntu 19.10   5.6.8-200.fc31.x86_64   containerd://1.3.3-14-g449e9269
multi-control-plane2   Ready    master   25m   v1.18.2   172.18.0.7    <none>        Ubuntu 19.10   5.6.8-200.fc31.x86_64   containerd://1.3.3-14-g449e9269
multi-control-plane3   Ready    master   23m   v1.18.2   172.18.0.4    <none>        Ubuntu 19.10   5.6.8-200.fc31.x86_64   containerd://1.3.3-14-g449e9269
multi-worker           Ready    <none>   22m   v1.18.2   172.18.0.9    <none>        Ubuntu 19.10   5.6.8-200.fc31.x86_64   containerd://1.3.3-14-g449e9269
multi-worker2          Ready    <none>   22m   v1.18.2   172.18.0.6    <none>        Ubuntu 19.10   5.6.8-200.fc31.x86_64   containerd://1.3.3-14-g449e9269
multi-worker3          Ready    <none>   22m   v1.18.2   172.18.0.5    <none>        Ubuntu 19.10   5.6.8-200.fc31.x86_64   containerd://1.3.3-14-g449e9269
[humble@localhost ~]$
[/terminal]

Don't get confused by the kernel listing: the `uname -r` output `5.6.8-200.fc31.x86_64` from my host system matches the listing above. The `kind` nodes are containers, so they share the host's kernel even though their OS image is Ubuntu.

Let us get some more information about this cluster:

[terminal]
[humble@localhost ~]$ sudo kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:46223
KubeDNS is running at https://127.0.0.1:46223/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[humble@localhost ~]$
[/terminal]

Below are some more details about the cluster nodes; I have truncated the output to show just one master and one worker node:
[terminal]
[humble@localhost ~]$ sudo kubectl describe nodes
Name: multi-control-plane
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multi-control-plane
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 10 May 2020 20:15:40 +0530
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: multi-control-plane
AcquireTime: <unset>
RenewTime: Sun, 10 May 2020 23:21:57 +0530
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
----   ------   -----------------   ------------------   ------   -------
MemoryPressure False Sun, 10 May 2020 23:19:27 +0530 Sun, 10 May 2020 20:15:40 +0530 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 10 May 2020 23:19:27 +0530 Sun, 10 May 2020 20:15:40 +0530 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 10 May 2020 23:19:27 +0530 Sun, 10 May 2020 20:15:40 +0530 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 10 May 2020 23:19:27 +0530 Sun, 10 May 2020 20:16:31 +0530 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.18.0.8
Hostname: multi-control-plane
Capacity:
cpu: 4
ephemeral-storage: 103077688Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 19885692Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 103077688Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 19885692Ki
pods: 110
System Info:
Machine ID: 090f76515dde4e02a5edf4dd513db6bb
System UUID: c594630a-9d52-4dbe-b70d-aaec20124dc8
Boot ID: 51e95ed3-91b9-492d-9b1b-6ff87def8d10
Kernel Version: 5.6.8-200.fc31.x86_64
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.3.3-14-g449e9269
Kubelet Version: v1.18.2
Kube-Proxy Version: v1.18.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
---------   ----   ------------   ----------   ---------------   -------------   ---
kube-system coredns-66bff467f8-4djpt 100m (2%) 0 (0%) 70Mi (0%) 170Mi (0%) 3h5m
kube-system coredns-66bff467f8-szq2g 100m (2%) 0 (0%) 70Mi (0%) 170Mi (0%) 3h6m
kube-system etcd-multi-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h6m
kube-system kindnet-d5qts 100m (2%) 100m (2%) 50Mi (0%) 50Mi (0%) 3h5m
kube-system kube-apiserver-multi-control-plane 250m (6%) 0 (0%) 0 (0%) 0 (0%) 3h6m
kube-system kube-controller-manager-multi-control-plane 200m (5%) 0 (0%) 0 (0%) 0 (0%) 3h6m
kube-system kube-proxy-bbbfl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h6m
kube-system kube-scheduler-multi-control-plane 100m (2%) 0 (0%) 0 (0%) 0 (0%) 3h6m
local-path-storage local-path-provisioner-bd4bb6b75-8w5d5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
--------   --------   ------
cpu 850m (21%) 100m (2%)
memory 190Mi (0%) 390Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
……

Name: multi-worker
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multi-worker
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 10 May 2020 20:19:57 +0530
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multi-worker
AcquireTime: <unset>
RenewTime: Sun, 10 May 2020 23:21:57 +0530
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
----   ------   -----------------   ------------------   ------   -------
MemoryPressure False Sun, 10 May 2020 23:19:29 +0530 Sun, 10 May 2020 20:19:57 +0530 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 10 May 2020 23:19:29 +0530 Sun, 10 May 2020 20:19:57 +0530 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 10 May 2020 23:19:29 +0530 Sun, 10 May 2020 20:19:57 +0530 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 10 May 2020 23:19:29 +0530 Sun, 10 May 2020 20:21:12 +0530 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.18.0.9
Hostname: multi-worker
Capacity:
cpu: 4
ephemeral-storage: 103077688Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 19885692Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 103077688Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 19885692Ki
pods: 110
System Info:
Machine ID: 24e26961e57440958576287c7b29145a
System UUID: a50a5bd5-81b4-4789-8180-0f4f986c09b8
Boot ID: 51e95ed3-91b9-492d-9b1b-6ff87def8d10
Kernel Version: 5.6.8-200.fc31.x86_64
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.3.3-14-g449e9269
Kubelet Version: v1.18.2
Kube-Proxy Version: v1.18.2
PodCIDR: 10.244.4.0/24
PodCIDRs: 10.244.4.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
---------   ----   ------------   ----------   ---------------   -------------   ---
kube-system kindnet-95jdp 100m (2%) 100m (2%) 50Mi (0%) 50Mi (0%) 3h2m
kube-system kube-proxy-5vv58 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h2m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
--------   --------   ------
cpu 100m (2%) 100m (2%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
………..
[/terminal]
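Notice the `node-role.kubernetes.io/master:NoSchedule` taint on the control-plane node above; this is exactly the kind of thing a multi-node cluster lets you experiment with. As a sketch (the pod name is made up), a pod that is allowed to land on the masters would carry a matching toleration and select the master label we saw in the node listing:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: on-master            # illustrative name
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule       # tolerate the master taint shown above
  nodeSelector:
    node-role.kubernetes.io/master: ""   # schedule only onto master nodes
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```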

It's time to deploy a pod and see whether it works 🙂

Let us write a `job.yaml`:

[terminal]
[humble@localhost ~]$ cat job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    # This is the pod template
    spec:
      containers:
      - name: hello
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure
    # The pod template ends here
[humble@localhost ~]$
[/terminal]

Let us create the above `job` resource in this cluster.

[terminal]
[humble@localhost ~]$ sudo kubectl create -f job.yaml
job.batch/hello created
[humble@localhost ~]$ sudo kubectl get pods -n default -w
NAME          READY   STATUS              RESTARTS   AGE
hello-mm7h2   0/1     ContainerCreating   0          18s
[humble@localhost ~]$ sudo kubectl get pods -n default
NAME          READY   STATUS    RESTARTS   AGE
hello-mm7h2   1/1     Running   0          79s

[humble@localhost ~]$ sudo kubectl get jobs
NAME    COMPLETIONS   DURATION   AGE
hello   1/1           61m        16h
[humble@localhost ~]$ sudo kubectl get pods -n default
NAME          READY   STATUS      RESTARTS   AGE
hello-mm7h2   0/1     Completed   0          16h
[humble@localhost ~]$ sudo kubectl logs hello-mm7h2 -n default
Hello, Kubernetes!
[humble@localhost ~]$

[/terminal]

Voilà! The cluster works, and you can deploy applications to it.
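When you are done experimenting, cleanup is a single command (this matches the `--name=multi` we used when creating the cluster):

```shell
# Delete the kind cluster created above.
kind delete cluster --name multi
# Verify it is gone; "multi" should no longer be listed.
kind get clusters
```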

Copyright secured by Digiprove © 2020 Humble Chirammal