Running Kubernetes conformance testing in a KIND cluster on ARM64

Before we start:

Conformance tests in Kubernetes are a set of automated tests designed to verify that a Kubernetes cluster adheres to the official Kubernetes API and behaves according to the expected standards. These tests ensure that a Kubernetes deployment is functioning correctly, is compatible with the broader Kubernetes ecosystem, and can interact seamlessly with other components.
By passing conformance tests, Kubernetes distributions and cloud providers can demonstrate that their systems are certified to meet the essential features of the platform, such as resource scheduling, networking, and security policies. The advantages of conformance tests include improved reliability, compatibility, and trust, enabling developers and organizations to deploy Kubernetes in production with confidence. They also foster a consistent experience across different environments and facilitate the identification of issues early in the development process.

A Kind (Kubernetes IN Docker) cluster is a tool for running local Kubernetes clusters using Docker containers as nodes. It is primarily designed for testing Kubernetes clusters and applications on Kubernetes in a lightweight, isolated environment. Kind creates a Kubernetes cluster in Docker containers, making it easy to set up and tear down clusters quickly without the need for dedicated virtual machines or cloud resources.
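If you don't have kind installed yet, a typical way to get it on macOS (assuming Homebrew, or alternatively a recent Go toolchain) is:

brew install kind
# or
go install sigs.k8s.io/kind@latest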

TL;DR
– Create a KIND cluster
– Create E2E test binary
– Run the conformance tests with the kubectl context pointed at the KIND cluster (condensed commands below)
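In command form, a condensed sketch of the steps detailed below (paths and the focus regex are from my setup and may differ in yours):

kind build node-image
export KUBECONFIG="${HOME}/.kube/kind-test-config"
kind create cluster --config kind-config.yaml --image kindest/node:latest
make WHAT="test/e2e/e2e.test"
./_output/bin/e2e.test -context kind-kind -ginkgo.focus="\[sig-network\].*Conformance" -num-nodes 2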

Let's run Kubernetes conformance tests in KIND:

KIND Cluster preparation:

Build the node image, define kind-config.yaml, and set KUBECONFIG

In your Kubernetes source checkout:

Create your kind node image:

kind build node-image

Create your kind e2e cluster config kind-config.yaml:


# necessary for conformance
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv4
nodes:
# the control plane node
- role: control-plane
- role: worker
- role: worker

Set your KUBECONFIG environment variable (kind writes the generated kubeconfig to this path):

export KUBECONFIG="${HOME}/.kube/kind-test-config"

Use the previous config to create your cluster:

kind create cluster --config kind-config.yaml --image kindest/node:latest -v4
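Once the cluster is up, a quick sanity check (the same commands appear in the long version below):

kubectl cluster-info --context kind-kind
kubectl get nodes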

E2E Binary Preparation:

Build the e2e test binary (from your Kubernetes source tree):


make WHAT="test/e2e/e2e.test"

Execute your tests:

./_output/bin/e2e.test -context kind-kind -ginkgo.focus="\[sig-network\].*Conformance" -num-nodes 2
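The -ginkgo.focus regex above limits the run to sig-network conformance specs. To run the full conformance suite instead, the same binary with a broader focus should work:

./_output/bin/e2e.test -context kind-kind -ginkgo.focus="\[Conformance\]" -num-nodes 2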

Long version, from my setup:

To expand on this, and for completeness: say you have a change in kube-proxy. You don't have to redo all the steps; you just need to:

build the new image:

bazel build //build:docker-artifacts
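On more recent Kubernetes trees, where the Bazel build has been removed, the corresponding step (to the best of my knowledge; the exact output path may differ) is:

make quick-release-images
# the image archives land under _output/release-images/<arch>/, e.g. kube-proxy.tar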

load it into your current kind cluster:


kind load image-archive bazel-bin/build/kube-proxy.tar

check that it has been loaded:


docker exec -it kind-control-plane crictl images
IMAGE                                      TAG                                        IMAGE ID        SIZE
docker.io/kindest/kindnetd                 0.5.4                                      2186a1a396deb   113MB
docker.io/rancher/local-path-provisioner   v0.0.11                                    9d12f9848b99f   36.5MB
k8s.gcr.io/coredns                         1.6.5                                      70f311871ae12   41.7MB
k8s.gcr.io/debian-base                     v2.0.0                                     9bd6154724425   53.9MB
k8s.gcr.io/etcd                            3.4.3-0                                    303ce5db0e90d   290MB
k8s.gcr.io/kube-apiserver                  v1.17.0                                    4a0e3a87a5e22   144MB
k8s.gcr.io/kube-controller-manager         v1.17.0                                    fa313a582b872   131MB
k8s.gcr.io/kube-proxy-amd64                v1.18.0-alpha.1.633_8428af6fd79e1f-dirty   5dd06b2bb1290   124MB
k8s.gcr.io/kube-proxy                      v1.17.0                                    e3dd0e2bea53a   132MB
k8s.gcr.io/kube-scheduler                  v1.17.0                                    cb8feb1d83dd3   112MB
k8s.gcr.io/pause                           3.1                                        da86e6ba6ca19   746kB

and modify the Kubernetes DaemonSet so it picks up the new image. You can follow the instructions for updating a DaemonSet's container image at
https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/ (see also https://github.com/kubernetes-sigs/kind/issues/1181); a sketch for kube-proxy follows.
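For kube-proxy specifically, assuming the kubeadm-default DaemonSet and container name kube-proxy, a one-liner along these lines should do it (the image tag must match what crictl images reports in your cluster; the tag below is the one from my output above):

kubectl -n kube-system set image daemonset/kube-proxy kube-proxy=k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.633_8428af6fd79e1f-dirty
kubectl -n kube-system rollout status daemonset/kube-proxy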

In my setup, the kind node-image build was first failing due to lack of disk space, as shown below:


github.com/coredns/caddy/caddyfile: mkdir /tmp.k8s/go-build2193697665/b2501/: no space left on device
k8s.io/system-validators/validators: mkdir /tmp.k8s/go-build2193697665/b2525/: no space left on device
!!! [0429 13:28:44] Call tree:
!!! [0429 13:28:44] 1: /go/src/k8s.io/kubernetes/hack/lib/golang.sh:771 kube::golang::build_some_binaries(...)
!!! [0429 13:28:44] 2: /go/src/k8s.io/kubernetes/hack/lib/golang.sh:941 kube::golang::build_binaries_for_platform(...)
!!! [0429 13:28:44] 3: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
!!! [0429 13:28:44] Call tree:
!!! [0429 13:28:44] 1: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
!!! [0429 13:28:44] Call tree:
!!! [0429 13:28:44] 1: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
make: *** [Makefile:92: all] Error 1
!!! [0429 13:28:44] Call tree:
!!! [0429 13:28:44] 1: build/../build/common.sh:488 kube::build::run_build_command_ex(...)
!!! [0429 13:28:44] 2: build/release-images.sh:40 kube::build::run_build_command(...)
make: *** [quick-release-images] Error 1

The error reported here is "no space left on device", which can be resolved by increasing the disk resources allocated to Docker Desktop on macOS (or by reclaiming space, as sketched below).
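Before (or instead of) resizing the Docker Desktop disk, it can be worth checking what is consuming space and pruning unused data, for example:

docker system df
docker system prune --volumes
docker builder prune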


chumble2TR91:kubernetes chumble$ kind build node-image
Starting to build Kubernetes
+++ [0429 13:31:56] Verifying Prerequisites....
+++ [0429 13:31:56] Using docker on macOS
+++ [0429 13:31:58] Building Docker image kube-build:build-45f9a540d8-5-v1.27.0-go1.20.3-bullseye.0
+++ [0429 13:32:03] Syncing sources to container
+++ [0429 13:32:07] Running build command...
+++ [0429 13:32:09] Setting GOMAXPROCS: 10
+++ [0429 13:32:09] Building go targets for linux/arm64
k8s.io/kubernetes/cmd/kube-apiserver (static)
k8s.io/kubernetes/cmd/kube-controller-manager (static)
k8s.io/kubernetes/cmd/kube-scheduler (static)
k8s.io/kubernetes/cmd/kube-proxy (static)
k8s.io/kubernetes/cmd/kubectl (static)
k8s.io/kubernetes/cmd/kubeadm (static)
k8s.io/kubernetes/cmd/kubectl (static)
k8s.io/kubernetes/cmd/kubelet (non-static)
+++ [0429 13:33:21] Syncing out of container
+++ [0429 13:33:31] Building images: linux-arm64
+++ [0429 13:33:31] Starting docker build for image: kube-apiserver-arm64
+++ [0429 13:33:31] Starting docker build for image: kube-controller-manager-arm64
+++ [0429 13:33:31] Starting docker build for image: kube-scheduler-arm64
+++ [0429 13:33:31] Starting docker build for image: kube-proxy-arm64
+++ [0429 13:33:31] Starting docker build for image: kubectl-arm64
+++ [0429 13:33:47] Deleting docker image registry.k8s.io/kubectl-arm64:v1.28.0-alpha.0.530_d8bdddcab42932-dirty
+++ [0429 13:33:47] Deleting docker image registry.k8s.io/kube-scheduler-arm64:v1.28.0-alpha.0.530_d8bdddcab42932-dirty
+++ [0429 13:33:48] Deleting docker image registry.k8s.io/kube-controller-manager-arm64:v1.28.0-alpha.0.530_d8bdddcab42932-dirty
+++ [0429 13:33:54] Deleting docker image registry.k8s.io/kube-proxy-arm64:v1.28.0-alpha.0.530_d8bdddcab42932-dirty
+++ [0429 13:34:10] Deleting docker image registry.k8s.io/kube-apiserver-arm64:v1.28.0-alpha.0.530_d8bdddcab42932-dirty
+++ [0429 13:34:10] Docker builds done
Finished building Kubernetes
Building node image ...
Building in container: kind-build-1682755456-1540799027
Image "kindest/node:latest" build completed.

The KIND node image has now been built, as you can see:


chumble2TR91:kubernetes chumble$ docker images
REPOSITORY                              TAG                                              IMAGE ID       CREATED             SIZE
kindest/node                            latest                                           da5453583d1d   28 seconds ago      1.32GB
kube-build                              build-45f9a540d8-5-v1.27.0-go1.20.3-bullseye.0   aaffa6b7ac5f   3 minutes ago       1.26GB
                                                                                         117a3ebe5a7d   7 minutes ago       1.26GB
                                                                                         258a6c82633d   16 minutes ago      1.26GB
kube-build                              build-a1b83e6b4a-5-v1.27.0-go1.20.3-bullseye.0   0a89e9dbddf9   About an hour ago   1.26GB
kindest/node                                                                             0dc0bbd0350c   4 weeks ago         858MB
kindest/base                            v20230330-89a4b81b                               227842213139   4 weeks ago         311MB
projects.registry.vmware.com/tce/kind   v1.22.7                                          fd1e232b07fe   13 months ago       1.14GB

Let us create a new KIND cluster from this node image and the config file:


chumble2TR91:kubernetes chumble$ kind create cluster --config kind-config.yaml --image kindest/node:latest -v4
Creating cluster "kind" ...
DEBUG: docker/images.go:58] Image: kindest/node:latest present locally
✓ Ensuring node image (kindest/node:latest) 🖼
✓ Preparing nodes 📦 📦 📦
DEBUG: config/config.go:96] Using the following kubeadm config for node kind-worker2:
apiServer:
certSANs:
- localhost
- 127.0.0.1
extraArgs:
runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: kind
controlPlaneEndpoint: kind-control-plane:6443
controllerManager:
extraArgs:
enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.28.0-alpha.0.530+d8bdddcab42932-dirty
networking:
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/16
scheduler:
extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.18.0.4
bindPort: 6443
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
kubeletExtraArgs:
node-ip: 172.18.0.4
node-labels: ""
provider-id: kind://docker/kind/kind-worker2
---
apiVersion: kubeadm.k8s.io/v1beta3
discovery:
bootstrapToken:
apiServerEndpoint: kind-control-plane:6443
token: abcdef.0123456789abcdef
unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
kubeletExtraArgs:
node-ip: 172.18.0.4
node-labels: ""
provider-id: kind://docker/kind/kind-worker2
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
imagefs.available: 0%
nodefs.available: 0%
nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
maxPerCore: 0
iptables:
minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
DEBUG: config/config.go:96] Using the following kubeadm config for node kind-worker:
apiServer:
certSANs:
- localhost
- 127.0.0.1
extraArgs:
runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: kind
controlPlaneEndpoint: kind-control-plane:6443
controllerManager:
extraArgs:
enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.28.0-alpha.0.530+d8bdddcab42932-dirty
networking:
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/16
scheduler:
extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.18.0.5
bindPort: 6443
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
kubeletExtraArgs:
node-ip: 172.18.0.5
node-labels: ""
provider-id: kind://docker/kind/kind-worker
---
apiVersion: kubeadm.k8s.io/v1beta3
discovery:
bootstrapToken:
apiServerEndpoint: kind-control-plane:6443
token: abcdef.0123456789abcdef
unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
kubeletExtraArgs:
node-ip: 172.18.0.5
node-labels: ""
provider-id: kind://docker/kind/kind-worker
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
imagefs.available: 0%
nodefs.available: 0%
nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
maxPerCore: 0
iptables:
minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
DEBUG: config/config.go:96] Using the following kubeadm config for node kind-control-plane:
apiServer:
certSANs:
- localhost
- 127.0.0.1
extraArgs:
runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: kind
controlPlaneEndpoint: kind-control-plane:6443
controllerManager:
extraArgs:
enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.28.0-alpha.0.530+d8bdddcab42932-dirty
networking:
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/16
scheduler:
extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.18.0.6
bindPort: 6443
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
kubeletExtraArgs:
node-ip: 172.18.0.6
node-labels: ""
provider-id: kind://docker/kind/kind-control-plane
---
apiVersion: kubeadm.k8s.io/v1beta3
controlPlane:
localAPIEndpoint:
advertiseAddress: 172.18.0.6
bindPort: 6443
discovery:
bootstrapToken:
apiServerEndpoint: kind-control-plane:6443
token: abcdef.0123456789abcdef
unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
kubeletExtraArgs:
node-ip: 172.18.0.6
node-labels: ""
provider-id: kind://docker/kind/kind-control-plane
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
imagefs.available: 0%
nodefs.available: 0%
nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
maxPerCore: 0
iptables:
minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
✓ Writing configuration 📜
DEBUG: kubeadminit/init.go:82] I0429 08:07:18.339790 135 initconfiguration.go:255] loading configuration from "/kind/kubeadm.conf"
W0429 08:07:18.340457 135 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
I0429 08:07:18.347847 135 common.go:128] WARNING: tolerating control plane version v1.28.0-alpha.0.530+d8bdddcab42932-dirty as a pre-release version
I0429 08:07:18.348093 135 certs.go:112] creating a new certificate authority for ca
[init] Using Kubernetes version: v1.28.0-alpha.0.530+d8bdddcab42932-dirty
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
I0429 08:07:18.583289 135 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.18.0.6 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0429 08:07:18.856250 135 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0429 08:07:18.975239 135 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0429 08:07:19.142521 135 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0429 08:07:19.244257 135 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0429 08:07:19.636517 135 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0429 08:07:19.724249 135 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0429 08:07:19.774390 135 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0429 08:07:19.915966 135 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0429 08:07:20.049706 135 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0429 08:07:20.131770 135 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0429 08:07:20.229196 135 manifests.go:99] [control-plane] getting StaticPodSpecs
I0429 08:07:20.229436 135 certs.go:519] validating certificate period for CA certificate
I0429 08:07:20.229486 135 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0429 08:07:20.229496 135 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0429 08:07:20.229499 135 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0429 08:07:20.229501 135 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0429 08:07:20.229504 135 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0429 08:07:20.230909 135 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0429 08:07:20.230927 135 manifests.go:99] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0429 08:07:20.231037 135 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0429 08:07:20.231049 135 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0429 08:07:20.231051 135 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0429 08:07:20.231054 135 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0429 08:07:20.231056 135 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0429 08:07:20.231058 135 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0429 08:07:20.231061 135 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0429 08:07:20.231409 135 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0429 08:07:20.231425 135 manifests.go:99] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0429 08:07:20.231520 135 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0429 08:07:20.231741 135 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
W0429 08:07:20.231897 135 images.go:80] could not find officially supported version of etcd for Kubernetes v1.28.0-alpha.0.530+d8bdddcab42932-dirty, falling back to the nearest etcd version (3.5.7-0)
I0429 08:07:20.232222 135 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0429 08:07:20.232242 135 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I0429 08:07:20.232603 135 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0429 08:07:20.236145 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s in 1 milliseconds
I0429 08:07:20.737046 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
I0429 08:07:21.237061 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
I0429 08:07:21.737819 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
I0429 08:07:22.238113 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
I0429 08:07:22.737134 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds
I0429 08:07:24.203201 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 966 milliseconds
I0429 08:07:24.237852 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 0 milliseconds
I0429 08:07:24.738486 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 0 milliseconds
I0429 08:07:25.237609 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 0 milliseconds
[apiclient] All control plane components are healthy after 5.503154 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0429 08:07:25.737825 135 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 0 milliseconds
I0429 08:07:25.737889 135 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I0429 08:07:25.742197 135 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 3 milliseconds
I0429 08:07:25.745200 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 2 milliseconds
I0429 08:07:25.747917 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 2 milliseconds
I0429 08:07:25.748062 135 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0429 08:07:25.750423 135 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 1 milliseconds
I0429 08:07:25.753871 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 3 milliseconds
I0429 08:07:25.756081 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 2 milliseconds
I0429 08:07:25.756187 135 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
I0429 08:07:25.756227 135 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "kind-control-plane" as an annotation
I0429 08:07:26.258847 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/nodes/kind-control-plane?timeout=10s 200 OK in 2 milliseconds
I0429 08:07:26.263093 135 round_trippers.go:553] PATCH https://kind-control-plane:6443/api/v1/nodes/kind-control-plane?timeout=10s 200 OK in 3 milliseconds
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
I0429 08:07:26.766339 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/nodes/kind-control-plane?timeout=10s 200 OK in 1 milliseconds
I0429 08:07:26.770489 135 round_trippers.go:553] PATCH https://kind-control-plane:6443/api/v1/nodes/kind-control-plane?timeout=10s 200 OK in 3 milliseconds
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0429 08:07:26.771988 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef?timeout=10s 404 Not Found in 1 milliseconds
I0429 08:07:26.774246 135 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/secrets?timeout=10s 201 Created in 1 milliseconds
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0429 08:07:26.776345 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 1 milliseconds
I0429 08:07:26.778282 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0429 08:07:26.780015 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0429 08:07:26.781571 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0429 08:07:26.783762 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0429 08:07:26.783842 135 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0429 08:07:26.784135 135 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
I0429 08:07:26.784146 135 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0429 08:07:26.784288 135 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0429 08:07:26.786057 135 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s 201 Created in 1 milliseconds
I0429 08:07:26.786158 135 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0429 08:07:26.788162 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s 201 Created in 1 milliseconds
I0429 08:07:26.790140 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s 201 Created in 1 milliseconds
I0429 08:07:26.790241 135 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0429 08:07:26.790488 135 loader.go:373] Config loaded from file: /etc/kubernetes/kubelet.conf
I0429 08:07:26.790717 135 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
I0429 08:07:26.926544 135 round_trippers.go:553] GET https://kind-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 2 milliseconds
I0429 08:07:26.930089 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 1 milliseconds
I0429 08:07:26.932878 135 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds
I0429 08:07:26.935097 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 1 milliseconds
I0429 08:07:26.937274 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds
I0429 08:07:26.939560 135 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 2 milliseconds
I0429 08:07:26.946747 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s 201 Created in 6 milliseconds
I0429 08:07:26.951967 135 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/services?timeout=10s 201 Created in 4 milliseconds
[addons] Applied essential addon: CoreDNS
I0429 08:07:26.955728 135 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds
I0429 08:07:26.959698 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s 201 Created in 3 milliseconds
I0429 08:07:26.968211 135 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 3 milliseconds
I0429 08:07:26.970298 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 milliseconds
I0429 08:07:26.978224 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 2 milliseconds
I0429 08:07:27.175175 135 request.go:628] Waited for 196.734625ms due to client-side throttling, not priority and fairness, request: POST:https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s
I0429 08:07:27.177065 135 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 1 milliseconds
[addons] Applied essential addon: kube-proxy
I0429 08:07:27.177411 135 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
I0429 08:07:27.177698 135 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join kind-control-plane:6443 --token \
--discovery-token-ca-cert-hash sha256:0e7645f3f223d6bf59519b6f28c98dc12f93af56637a8e70f613117f834802c8 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kind-control-plane:6443 --token \
--discovery-token-ca-cert-hash sha256:0e7645f3f223d6bf59519b6f28c98dc12f93af56637a8e70f613117f834802c8
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
DEBUG: kubeadmjoin/join.go:133] I0429 08:07:30.050482 135 join.go:412] [preflight] found NodeName empty; using OS hostname as NodeName
I0429 08:07:30.050537 135 joinconfiguration.go:76] loading configuration from "/kind/kubeadm.conf"
I0429 08:07:30.050923 135 controlplaneprepare.go:225] [download-certs] Skipping certs download
I0429 08:07:30.050947 135 join.go:529] [preflight] Discovering cluster-info
I0429 08:07:30.050952 135 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "kind-control-plane:6443"
I0429 08:07:30.058360 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 6 milliseconds
I0429 08:07:30.058522 135 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I0429 08:07:35.762641 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 2 milliseconds
I0429 08:07:35.762896 135 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I0429 08:07:42.009998 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 4 milliseconds
I0429 08:07:42.010916 135 token.go:105] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "kind-control-plane:6443"
I0429 08:07:42.010950 135 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0429 08:07:42.010964 135 join.go:543] [preflight] Fetching init configuration
I0429 08:07:42.010970 135 join.go:589] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0429 08:07:42.019661 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 8 milliseconds
I0429 08:07:42.021846 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy?timeout=10s 200 OK in 1 milliseconds
I0429 08:07:42.022721 135 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I0429 08:07:42.023904 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config?timeout=10s 200 OK in 1 milliseconds
I0429 08:07:42.025071 135 interface.go:432] Looking for default routes with IPv4 addresses
I0429 08:07:42.025083 135 interface.go:437] Default route transits interface "eth0"
I0429 08:07:42.025166 135 interface.go:209] Interface eth0 is up
I0429 08:07:42.025209 135 interface.go:257] Interface "eth0" has 3 addresses :[172.18.0.5/16 fc00:f853:ccd:e793::5/64 fe80::42:acff:fe12:5/64].
I0429 08:07:42.025226 135 interface.go:224] Checking addr 172.18.0.5/16.
I0429 08:07:42.025231 135 interface.go:231] IP found 172.18.0.5
I0429 08:07:42.025244 135 interface.go:263] Found valid IPv4 address 172.18.0.5 for interface "eth0".
I0429 08:07:42.025248 135 interface.go:443] Found active IP 172.18.0.5
I0429 08:07:42.036005 135 common.go:128] WARNING: tolerating control plane version v1.28.0-alpha.0.530+d8bdddcab42932-dirty as a pre-release version
I0429 08:07:42.036029 135 kubelet.go:121] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0429 08:07:42.036504 135 kubelet.go:136] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0429 08:07:42.036756 135 loader.go:373] Config loaded from file: /etc/kubernetes/bootstrap-kubelet.conf
I0429 08:07:42.036940 135 kubelet.go:157] [kubelet-start] Checking for an existing Node in the cluster with name "kind-worker" and status "Ready"
I0429 08:07:42.038536 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/nodes/kind-worker?timeout=10s 404 Not Found in 1 milliseconds
I0429 08:07:42.038739 135 kubelet.go:172] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0429 08:07:43.129088 135 loader.go:373] Config loaded from file: /etc/kubernetes/kubelet.conf
I0429 08:07:43.129611 135 cert_rotation.go:137] Starting client certificate rotation controller
I0429 08:07:43.129684 135 loader.go:373] Config loaded from file: /etc/kubernetes/kubelet.conf
I0429 08:07:43.129820 135 kubelet.go:220] [kubelet-start] preserving the crisocket information for the node
I0429 08:07:43.129848 135 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "kind-worker" as an annotation
I0429 08:07:43.635180 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/nodes/kind-worker?timeout=10s 404 Not Found in 4 milliseconds
I0429 08:07:44.132681 135 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/nodes/kind-worker?timeout=10s 200 OK in 1 milliseconds
I0429 08:07:44.136895 135 round_trippers.go:553] PATCH https://kind-control-plane:6443/api/v1/nodes/kind-worker?timeout=10s 200 OK in 3 milliseconds

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
DEBUG: kubeadmjoin/join.go:133] I0429 08:07:30.050342 133 join.go:412] [preflight] found NodeName empty; using OS hostname as NodeName
I0429 08:07:30.050414 133 joinconfiguration.go:76] loading configuration from "/kind/kubeadm.conf"
I0429 08:07:30.050917 133 controlplaneprepare.go:225] [download-certs] Skipping certs download
I0429 08:07:30.050930 133 join.go:529] [preflight] Discovering cluster-info
I0429 08:07:30.050937 133 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "kind-control-plane:6443"
I0429 08:07:30.058360 133 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 6 milliseconds
I0429 08:07:30.058522 133 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I0429 08:07:36.512197 133 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 1 milliseconds
I0429 08:07:36.512330 133 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I0429 08:07:42.040704 133 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 1 milliseconds
I0429 08:07:42.041639 133 token.go:105] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "kind-control-plane:6443"
I0429 08:07:42.041750 133 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0429 08:07:42.041837 133 join.go:543] [preflight] Fetching init configuration
I0429 08:07:42.041877 133 join.go:589] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0429 08:07:42.047750 133 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 5 milliseconds
I0429 08:07:42.049491 133 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy?timeout=10s 200 OK in 1 milliseconds
I0429 08:07:42.050134 133 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I0429 08:07:42.051420 133 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config?timeout=10s 200 OK in 1 milliseconds
I0429 08:07:42.052380 133 interface.go:432] Looking for default routes with IPv4 addresses
I0429 08:07:42.052391 133 interface.go:437] Default route transits interface "eth0"
I0429 08:07:42.052465 133 interface.go:209] Interface eth0 is up
I0429 08:07:42.052505 133 interface.go:257] Interface "eth0" has 3 addresses :[172.18.0.4/16 fc00:f853:ccd:e793::4/64 fe80::42:acff:fe12:4/64].
I0429 08:07:42.052520 133 interface.go:224] Checking addr 172.18.0.4/16.
I0429 08:07:42.052524 133 interface.go:231] IP found 172.18.0.4
I0429 08:07:42.052531 133 interface.go:263] Found valid IPv4 address 172.18.0.4 for interface "eth0".
I0429 08:07:42.052535 133 interface.go:443] Found active IP 172.18.0.4
I0429 08:07:42.056901 133 common.go:128] WARNING: tolerating control plane version v1.28.0-alpha.0.530+d8bdddcab42932-dirty as a pre-release version
I0429 08:07:42.056926 133 kubelet.go:121] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0429 08:07:42.057374 133 kubelet.go:136] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0429 08:07:42.057616 133 loader.go:373] Config loaded from file: /etc/kubernetes/bootstrap-kubelet.conf
I0429 08:07:42.057802 133 kubelet.go:157] [kubelet-start] Checking for an existing Node in the cluster with name "kind-worker2" and status "Ready"
I0429 08:07:42.059305 133 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/nodes/kind-worker2?timeout=10s 404 Not Found in 1 milliseconds
I0429 08:07:42.059468 133 kubelet.go:172] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0429 08:07:43.134993 133 loader.go:373] Config loaded from file: /etc/kubernetes/kubelet.conf
I0429 08:07:43.135354 133 cert_rotation.go:137] Starting client certificate rotation controller
I0429 08:07:43.135454 133 loader.go:373] Config loaded from file: /etc/kubernetes/kubelet.conf
I0429 08:07:43.135626 133 kubelet.go:220] [kubelet-start] preserving the crisocket information for the node
I0429 08:07:43.135651 133 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "kind-worker2" as an annotation
I0429 08:07:43.640349 133 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/nodes/kind-worker2?timeout=10s 404 Not Found in 4 milliseconds
I0429 08:07:44.137322 133 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/nodes/kind-worker2?timeout=10s 200 OK in 1 milliseconds
I0429 08:07:44.141827 133 round_trippers.go:553] PATCH https://kind-control-plane:6443/api/v1/nodes/kind-worker2?timeout=10s 200 OK in 3 milliseconds

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
chumble2TR91:kubernetes chumble$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:56623
CoreDNS is running at https://127.0.0.1:56623/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Inspect the cluster details:


chumble2TR91:kubernetes chumble$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.0-alpha.0.530+d8bdddcab42932-dirty", GitCommit:"d8bdddcab4293284ce9f11b12f37fb827fc56f7c", GitTreeState:"dirty", BuildDate:"2023-04-29T08:02:09Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/arm64"}
WARNING: version difference between client (1.25) and server (1.28) exceeds the supported minor version skew of +/-1

chumble2TR91:kubernetes chumble$ /usr/local/bin/kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.0-alpha.0.530+d8bdddcab42932-dirty", GitCommit:"d8bdddcab4293284ce9f11b12f37fb827fc56f7c", GitTreeState:"dirty", BuildDate:"2023-04-29T08:02:09Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/arm64"}
WARNING: version difference between client (1.25) and server (1.28) exceeds the supported minor version skew of +/-1

cumble2TR91:kubernetes chumble$ _output/bin/kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.0-alpha.0.476+81076233e71cf5-dirty", GitCommit:"81076233e71cf5ffa7e8041be4289fd54bde1527", GitTreeState:"dirty", BuildDate:"2023-04-29T05:20:40Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.0-alpha.0.530+d8bdddcab42932-dirty", GitCommit:"d8bdddcab4293284ce9f11b12f37fb827fc56f7c", GitTreeState:"dirty", BuildDate:"2023-04-29T08:02:09Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/arm64"}

chumble2TR91:kubernetes chumble$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   23m   v1.28.0-alpha.0.530+d8bdddcab42932-dirty
kind-worker          Ready                    22m   v1.28.0-alpha.0.530+d8bdddcab42932-dirty
kind-worker2         Ready                    22m   v1.28.0-alpha.0.530+d8bdddcab42932-dirty

chumble2TR91:kubernetes chumble$ make WHAT="test/e2e/e2e.test"
go version go1.20.3 darwin/arm64
+++ [0429 14:01:30] Setting GOMAXPROCS: 10
+++ [0429 14:01:32] Building go targets for darwin/arm64
k8s.io/kubernetes/test/e2e/e2e.test (test)

chumble2TR91:kubernetes chumble$ ll test/e2e/e2e
e2e-example-config.json e2e.go e2e_test.go

chumble2TR91:kubernetes chumble$ ls -lh _output/bin/e2e.test
-rwxr-xr-x@ 1 chumble staff 142M Apr 29 14:02 _output/bin/e2e.test

chumble2TR91:kubernetes chumble$ ./_output/bin/e2e.test -context kind-kind -ginkgo.focus="\[sig-network\].*Conformance" -num-nodes 2
Apr 29 14:04:50.815: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0429 14:04:50.816778 89284 e2e.go:117] Starting e2e run "4f955303-8f34-42ee-aed0-c2ba14b657e0" on Ginkgo node 1
Running Suite: Kubernetes e2e suite - /Users/chumble/gospace/src/k8s.io/kubernetes
==================================================================================
Random Seed: 1682757290 - will randomize all specs

Will run 40 of 7340 specs
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS

Reference:

https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md#running-conformance-tests

Understanding Advanced Pod Scheduling in Kubernetes: Pod Disruption Budgets, Runtime Classes, and Priority Classes

Kubernetes is a powerful orchestration tool for managing containerized applications. While basic scheduling policies like taints, tolerations, and node affinity are well-known, advanced features like Pod Disruption Budgets (PDBs), Runtime Classes, and Priority Classes provide finer control over pod scheduling and lifecycle management. In this blog post, we’ll dive into these advanced concepts, explore why and when to use them, and discuss their advantages over other scheduling policies.

1. Pod Disruption Budgets (PDBs)

What is a Pod Disruption Budget?

A Pod Disruption Budget (PDB) is a Kubernetes resource that allows you to specify the minimum number or percentage of pods that must remain available during voluntary disruptions. Voluntary disruptions include actions like draining a node for maintenance, upgrading a cluster, or scaling down a deployment.

Why Use Pod Disruption Budgets?

Ensure High Availability: PDBs prevent too many pods from being terminated simultaneously, ensuring that your application remains available during disruptions.

Control Over Disruptions: They give you fine-grained control over how many pods can be disrupted, balancing between application availability and cluster maintenance.

When to Use Pod Disruption Budgets?

Stateful Applications: For stateful applications like databases, where losing too many pods can lead to data inconsistency or downtime.

Critical Workloads: For mission-critical workloads where even a small amount of downtime is unacceptable.

Rolling Updates: When performing rolling updates or cluster upgrades, PDBs ensure that a minimum number of pods are always running.

Advantages Over Other Scheduling Policies

Focused on Disruptions: Unlike taints and tolerations, which focus on pod placement, PDBs focus on pod availability during disruptions.

Dynamic Control: PDBs work dynamically with cluster operations, ensuring that disruptions don’t violate the specified budget.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app

This PDB ensures that at least 2 pods of the my-app application are always available during voluntary disruptions.
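A PDB can also be expressed with maxUnavailable, including as a percentage, which is often easier to reason about for larger deployments; a minimal sketch:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 25%
  selector:
    matchLabels:
      app: my-app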

2. Runtime Classes

What is a Runtime Class?

A Runtime Class is a Kubernetes feature that allows you to select the container runtime for your pods. Different runtimes can provide varying levels of performance, security, or compatibility.

Why Use Runtime Classes?

Specialized Runtimes: Use runtimes optimized for specific workloads, such as high-performance computing or secure sandboxing.

Isolation and Security: Choose runtimes like gVisor or Kata Containers for enhanced security and isolation.

Compatibility: Run workloads that require specific runtime environments.

When to Use Runtime Classes?

Security-Sensitive Workloads: For workloads that require strong isolation, such as multi-tenant environments.

Performance-Critical Applications: For applications that benefit from lightweight or high-performance runtimes.

Legacy Workloads: For workloads that require compatibility with specific runtime environments.

Advantages Over Other Scheduling Policies

Runtime Flexibility: Unlike node affinity or taints, which focus on node selection, Runtime Classes allow you to choose the container runtime itself.

Enhanced Security: Provides an additional layer of security by isolating workloads at the runtime level.

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: secure-container
    image: nginx

This example uses the gVisor runtime for enhanced security.
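A RuntimeClass can also declare pod overhead and scheduling constraints, so the extra resources consumed by a sandboxed runtime are accounted for and pods only land on nodes that actually have the handler installed. A sketch (the overhead values and the node label runtime=gvisor are hypothetical):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
overhead:
  podFixed:
    cpu: 250m
    memory: 120Mi
scheduling:
  nodeSelector:
    runtime: gvisor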

3. Priority Classes

What is a Priority Class?

A Priority Class is a Kubernetes resource that allows you to assign priority levels to pods. Higher-priority pods are scheduled ahead of lower-priority pods and are less likely to be evicted when the cluster runs short on resources.

Why Use Priority Classes?

Critical Workloads: Ensure that critical workloads are scheduled and run before less important ones.

Resource Guarantees: Prevent lower-priority pods from starving higher-priority pods of resources.

Eviction Control: Control the order in which pods are evicted during resource contention.

When to Use Priority Classes?

Mixed Workloads: In clusters running both critical and non-critical workloads.

Resource-Intensive Applications: For applications that require guaranteed access to resources.

Multi-Tenant Environments: To prioritize workloads from different tenants or teams.

Advantages Over Other Scheduling Policies

Priority-Based Scheduling: Unlike taints and tolerations, which focus on node selection, Priority Classes focus on the importance of the pod itself.

Eviction Control: Provides control over pod eviction, ensuring that critical pods are not evicted unnecessarily.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "This priority class is for critical workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-pod
spec:
  priorityClassName: high-priority
  containers:
  - name: critical-container
    image: nginx

This example assigns a high priority to critical-pod, ensuring it is scheduled ahead of lower-priority pods and is less likely to be preempted or evicted.
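Kubernetes also ships with two built-in priority classes, system-cluster-critical and system-node-critical, which are reserved for system components. You can list everything defined in your cluster with:

kubectl get priorityclass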

Comparing Advanced Scheduling Policies

| Feature | Focus Area | Use Case Example | Advantage Over Basic Policies |
| --- | --- | --- | --- |
| Pod Disruption Budget | Pod availability during disruptions | Ensuring database pods remain available | Dynamic control over disruptions |
| Runtime Class | Container runtime selection | Running secure or high-performance pods | Flexibility in runtime environments |
| Priority Class | Pod scheduling and eviction order | Prioritizing critical workloads | Ensures resource guarantees |

Conclusion

Advanced scheduling policies like Pod Disruption Budgets, Runtime Classes, and Priority Classes provide Kubernetes users with powerful tools to manage pod lifecycle, runtime environments, and resource allocation. By understanding when and why to use these features, you can optimize your cluster for high availability, security, and performance.

While basic scheduling policies like taints and tolerations are essential for pod placement, these advanced features address specific challenges like disruption management, runtime isolation, and workload prioritization. Incorporating them into your Kubernetes strategy can significantly enhance the reliability and efficiency of your applications.

Whether you’re running stateful applications, security-sensitive workloads, or mixed criticality environments, these advanced scheduling policies offer the flexibility and control you need to succeed in a production-grade Kubernetes setup.

Understanding Kubernetes Resource Management: A Beginner’s Guide

Kubernetes has become the go-to platform for managing containerized applications, but to truly harness its power, you need to understand its resource model. In this blog post, we’ll break down how Kubernetes handles resources like CPU and memory, how to configure them, and how to monitor their usage effectively.

What is the Kubernetes Resource Model?

At its core, Kubernetes uses a resource model based on requests and limits. These are the building blocks for managing compute resources like CPU and memory for your applications.

– Requests: This is the minimum amount of resources (CPU/memory) that a container needs to run. Kubernetes uses this information to schedule Pods on nodes with sufficient resources.
– Limits: This is the maximum amount of resources a container can use. If a container exceeds its memory limit, it may be terminated. If it exceeds its CPU limit, it will be throttled.

Key Resource Types in Kubernetes
– CPU: Measured in CPU units (e.g., `0.5` for half a CPU core or `1000m` for 1000 millicores).
– Memory: Measured in bytes (e.g., `512Mi` for 512 mebibytes or `2Gi` for 2 gibibytes).
– Ephemeral Storage: Temporary disk space used by containers.
– Extended Resources: Custom resources like GPUs or other hardware accelerators.
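For example, a Pod that needs a GPU and some scratch disk can ask for them like this (a sketch that assumes a device plugin exposes the `nvidia.com/gpu` extended resource; the resource name depends on your hardware and device plugin):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: gpu-container
    image: nginx   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1            # extended resources are specified under limits
        ephemeral-storage: "1Gi"     # temporary disk space for the container
```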

How to Configure Resources in Kubernetes

Configuring resources in Kubernetes is straightforward, and there are several ways to do it depending on your needs.

# 1. Resource Requests and Limits in Pod Specs
You can define resource requests and limits directly in your Pod manifest. This is the most common way to specify resource requirements for individual containers.

Here’s an example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```
In this example, the container requests 64Mi of memory and 250m of CPU, with limits set to 128Mi of memory and 500m of CPU.

# 2. Namespace-Level Resource Quotas
If you’re working in a multi-tenant environment, you might want to limit resource usage at the namespace level. This is where ResourceQuotas come in.

Example:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-resource-quota
  namespace: my-namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: "2Gi"
    limits.cpu: "4"
    limits.memory: "4Gi"
```
This ResourceQuota ensures that all Pods in the `my-namespace` namespace collectively don’t exceed the specified limits.

# 3. Limit Ranges
To enforce default resource requests and limits for all Pods in a namespace, you can use LimitRanges.

Example:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: my-limit-range
  namespace: my-namespace
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "250m"
      memory: "256Mi"
    type: Container
```
This ensures that every container in the namespace gets default resource requests and limits if they aren’t explicitly defined.

# 4. Horizontal Pod Autoscaler (HPA)
Kubernetes can automatically scale your applications based on resource usage using the Horizontal Pod Autoscaler (HPA). For example, you can configure HPA to scale a Deployment based on CPU utilization.

Example:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```
This configuration ensures that the number of Pods scales up or down to maintain an average CPU utilization of 50%.
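If you prefer an imperative workflow, roughly the same autoscaler can be created with a single command (assuming a Deployment named `my-deployment` already exists):
```bash
kubectl autoscale deployment my-deployment --cpu-percent=50 --min=1 --max=10
```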

How to Monitor Resource Usage in Kubernetes

Monitoring is crucial to ensure your applications are running efficiently and to avoid resource exhaustion. Kubernetes provides several tools to help you keep an eye on resource usage.

# 1. Metrics Server
The Metrics Server collects resource usage data (CPU and memory) from Kubernetes nodes and Pods. It’s used by tools like `kubectl top` and the Horizontal Pod Autoscaler (HPA).

To install the Metrics Server:
```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```

Once installed, you can view resource usage with:
```bash
kubectl top nodes
kubectl top pods
```

# 2. Kubernetes Dashboard
The Kubernetes Dashboard provides a user-friendly interface to view resource usage and manage your workloads.
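One common way to deploy it is to apply the manifest published for a specific release and reach it through `kubectl proxy` (the version in the URL below is an assumption; check the Dashboard releases page for the current one):
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl proxy
# then open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```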

# 3. Prometheus and Grafana
For advanced monitoring, you can use Prometheus to scrape metrics from Kubernetes components and Grafana to visualize them.

To set up Prometheus and Grafana using Helm:
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack
```

# 4. cAdvisor
cAdvisor is integrated into the Kubelet and provides container-level resource usage metrics.
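Because cAdvisor is embedded in the kubelet, its metrics can be pulled through the API server without installing anything extra; for example (replace the node name with one from `kubectl get nodes`):
```bash
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics/cadvisor" | head
```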

# 5. Third-Party Monitoring Tools
Tools like Datadog, Sysdig, and New Relic offer advanced monitoring and alerting capabilities for Kubernetes clusters.

How Kubernetes Manages Resources

Here’s a quick overview of how Kubernetes handles resource allocation:

1. Scheduling: When you create a Pod, the Kubernetes scheduler evaluates its resource requests and assigns it to a node with sufficient resources.
2. Enforcement: The Kubelet on each node enforces resource limits using cgroups (see the example after this list). If a container exceeds its memory limit, it may be terminated. If it exceeds its CPU limit, it will be throttled.
3. Autoscaling: The Horizontal Pod Autoscaler (HPA) adjusts the number of Pods based on resource utilization, while the Vertical Pod Autoscaler (VPA) adjusts resource requests and limits for individual Pods.
4. Quotas and Limits: ResourceQuota and LimitRange objects enforce namespace-level constraints and defaults.
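You can see both sides of this accounting yourself: what the scheduler considers allocatable on a node, and the cgroup limit the kubelet actually enforces for a container. A minimal sketch, assuming the `my-pod` example from earlier is running and the node uses cgroup v2 (on cgroup v1 hosts the file is `/sys/fs/cgroup/memory/memory.limit_in_bytes`):
```bash
# Scheduler's view: allocatable capacity and the requests/limits already placed on the node
kubectl describe node <node-name>

# Kubelet's enforcement: the memory limit applied to the container's cgroup
kubectl exec my-pod -- cat /sys/fs/cgroup/memory.max
```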

Best Practices for Kubernetes Resource Management

– Set realistic requests and limits based on your application’s needs.
– Use ResourceQuotas to prevent overcommitment in namespaces.
– Regularly monitor resource usage and adjust configurations as needed.
– Use HPA and VPA for dynamic scaling.
– Test your workloads under load to identify resource bottlenecks.

By understanding and effectively managing Kubernetes resources, you can ensure that your applications run smoothly and efficiently. Whether you’re running a small application or a large-scale system, mastering Kubernetes resource management is key to success.

A Comprehensive Guide to Upgrading Kubernetes Nodes: Best Practices, Techniques, and Internal Mechanics

Kubernetes has become the de facto standard for container orchestration, enabling organizations to manage containerized applications at scale. However, as with any complex system, keeping your Kubernetes cluster up-to-date is critical for security, performance, and access to new features. Upgrading Kubernetes nodes—both control plane and worker nodes—requires careful planning and execution to ensure minimal disruption to your workloads.

In this blog, we’ll dive deep into the process of upgrading Kubernetes nodes, covering prerequisites, cordoning techniques, internal mechanics, and best practices. We’ll also explore how pod priorities and affinities play a role during upgrades, and the order in which control plane and worker node upgrades should be performed.

Prerequisites for Upgrading Kubernetes Nodes

Before diving into the upgrade process, ensure the following prerequisites are met:

1. Backup Your Cluster: Always take a backup of your cluster’s state, including etcd data, configurations, and workloads. Tools like Velero can help with this (see the etcd snapshot example after this list).
2. Check Kubernetes Version Compatibility: Ensure the target version is compatible with your current version. Kubernetes only supports upgrading one minor version at a time, so do not skip minor versions.
3. Review Release Notes: Familiarize yourself with the release notes of the target version to understand new features, deprecations, and potential breaking changes.
4. Update kubectl: Ensure your `kubectl` CLI tool is updated to match the target Kubernetes version.
5. Drain Workloads: Plan to drain workloads from nodes before upgrading them to avoid disruptions.
6. Test in a Staging Environment: If possible, test the upgrade process in a non-production environment to identify potential issues.
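As an illustration of the backup step, a raw etcd snapshot can be taken with `etcdctl`. This is a sketch assuming a kubeadm-style cluster where etcd listens on 127.0.0.1:2379 and its certificates live under /etc/kubernetes/pki/etcd; adjust the endpoint and paths for your setup:
```bash
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```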

Understanding Cordon and Drain Techniques

# What is Cordon?

Cordoning a node marks it as unschedulable, preventing new pods from being scheduled on it. This is a critical step before upgrading a node to ensure no new workloads are assigned to it during the upgrade process.

What Happens Internally When You Cordon a Node?
– The Kubernetes scheduler updates its internal state to exclude the node from scheduling decisions.
– Existing pods on the node continue to run unless explicitly drained.
– The node’s status in the Kubernetes API is updated to reflect its unschedulable state.

How to Cordon a Node
```bash
kubectl cordon <node-name>
```

# What is Drain?

Draining a node gracefully evicts all running pods from the node. This ensures that workloads are rescheduled on other nodes before the upgrade begins.

How to Drain a Node
```bash
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
```
– `--ignore-daemonsets`: DaemonSets are typically excluded from draining since they are tied to specific nodes.
– `--delete-emptydir-data`: Deletes data stored in emptyDir volumes, which are ephemeral.

Order of Upgrades: Control Plane vs. Worker Nodes

# Control Plane Upgrades

The control plane components (API server, scheduler, controller manager, etcd) should be upgraded first. Here’s the typical order:

1. Upgrade etcd: As the backbone of Kubernetes, etcd stores the cluster’s state. Ensure it’s upgraded first.
2. Upgrade kube-apiserver: The API server is the front end for the control plane and must be compatible with the upgraded etcd.
3. Upgrade kube-controller-manager and kube-scheduler: These components should be upgraded next.
4. Upgrade cloud-controller-manager (if applicable): For clusters running in cloud environments.

# Worker Node Upgrades

Once the control plane is upgraded, proceed with worker nodes. Worker nodes can be upgraded in parallel or sequentially, depending on your cluster size and workload requirements.
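For clusters built with kubeadm, the end-to-end flow typically looks like the sketch below. The commands are illustrative rather than a full runbook; the target version and the package-manager steps for kubelet/kubectl depend on your distribution:
```bash
# On the first control plane node
kubeadm upgrade plan                  # shows the versions you can upgrade to
sudo kubeadm upgrade apply v1.29.x    # version is illustrative
# upgrade the kubelet/kubectl packages with your package manager, then:
sudo systemctl restart kubelet

# On each worker node, one at a time
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
sudo kubeadm upgrade node             # run on the worker itself
# upgrade the kubelet package on the worker, restart it, then:
kubectl uncordon <node-name>
```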

Pod Priorities and Affinities During Upgrades

# Pod Priorities

Pod PriorityClass allows you to define the importance of pods. During upgrades, higher-priority pods are rescheduled first, ensuring critical workloads are not disrupted.

– Preemption: If resources are scarce, lower-priority pods may be preempted to make room for higher-priority pods.
– Best Practice: Assign appropriate priorities to your workloads to ensure critical applications are prioritized during upgrades.

# Pod Affinities and Anti-Affinities

Pod affinities and anti-affinities influence how pods are scheduled relative to each other. During upgrades:

– Pod Affinity: Ensures related pods are scheduled together, which can help maintain application performance.
– Pod Anti-Affinity: Prevents pods from being scheduled on the same node, improving fault tolerance.

Best Practices
– Use anti-affinity rules for critical workloads to ensure they are spread across multiple nodes.
– Leverage affinities to maintain application performance and reduce latency.

Best Practices for Upgrading Kubernetes Nodes

1. Follow a Rolling Upgrade Strategy: Upgrade nodes one at a time to minimize downtime and ensure workloads are rescheduled smoothly.
2. Monitor Cluster Health: Use tools like Prometheus and Grafana to monitor cluster health during the upgrade process.
3. Use Automation Tools: Tools like `kubeadm`, `kops`, or managed Kubernetes services (e.g., GKE, EKS, AKS) can simplify the upgrade process.
4. Test Upgrades in a Staging Environment: Always test upgrades in a non-production environment to identify potential issues.
5. Communicate with Stakeholders: Inform your team and stakeholders about the upgrade schedule and potential downtime.
6. Plan for Rollbacks: Have a rollback plan in case the upgrade encounters issues. This includes backing up etcd and having a tested rollback procedure.

Conclusion

Upgrading Kubernetes nodes is a critical task that requires careful planning and execution. By understanding the prerequisites, cordoning and draining techniques, and the internal mechanics of Kubernetes, you can ensure a smooth upgrade process. Additionally, leveraging pod priorities and affinities can help minimize disruptions to your workloads.

Remember to follow best practices, such as testing upgrades in a staging environment, monitoring cluster health, and using automation tools. With the right approach, you can keep your Kubernetes cluster secure, performant, and up-to-date with the latest features.

Happy upgrading! 🚀

Further Reading:
– [Kubernetes Official Documentation on Upgrades](https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/)
– [Velero: Backup and Restore Kubernetes Clusters](https://velero.io/)
– [Kubernetes Pod Priority and Preemption](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/)

Mastering Pod Allocation in Kubernetes: NodeSelectors, Taints, Tolerations, and More

Kubernetes is a powerful platform for managing containerized workloads, but one of its most critical features is Pod scheduling. How Pods are allocated to nodes can significantly impact the performance, reliability, and efficiency of your applications. In this blog post, we’ll dive deep into the mechanisms Kubernetes provides for Pod scheduling, including NodeSelectors, Taints and Tolerations, Affinity and Anti-Affinity Rules, and more. We’ll also explore best practices and when to use each feature.

How Pod Allocation Works in Kubernetes

When you create a Pod, Kubernetes needs to decide which node in the cluster should run it. This process is called scheduling. By default, the Kubernetes scheduler uses a set of rules to determine the best node for a Pod. However, you can influence this decision using advanced features like NodeSelectors, Taints, Tolerations, and Affinity Rules.

Let’s break down each of these mechanisms and understand how they work.

1. NodeSelectors: Simple Node Selection

NodeSelectors are the simplest way to influence Pod scheduling. They allow you to specify a set of key-value pairs (labels) that a node must have for the Pod to be scheduled on it.

# How to Use NodeSelectors
1. Label your nodes:
```bash
kubectl label nodes <node-name> disktype=ssd
```
2. Add a `nodeSelector` to your Pod spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
  nodeSelector:
    disktype: ssd
```

# When to Use NodeSelectors
– When you want to schedule Pods on specific nodes with certain characteristics (e.g., SSD storage, GPU availability).
– For simple, static scheduling requirements.

2. Taints and Tolerations: Keeping Pods Away from Nodes

Taints and Tolerations work together to ensure that Pods are not scheduled on inappropriate nodes. A taint is applied to a node to repel Pods, and a toleration is applied to a Pod to allow it to run on a tainted node.

# How to Use Taints and Tolerations
1. Taint a node:
```bash
kubectl taint nodes <node-name> key=value:NoSchedule
```
This prevents Pods without a matching toleration from being scheduled on the node.

2. Add a toleration to your Pod spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
```

# When to Use Taints and Tolerations
– To reserve nodes for specific workloads (e.g., GPU nodes for machine learning workloads).
– To prevent general-purpose workloads from running on specialized nodes.
– To cordon off nodes for maintenance or upgrades.

3. Affinity and Anti-Affinity: Advanced Scheduling Rules

Affinity and Anti-Affinity rules allow you to define more complex scheduling requirements. These rules can be based on node labels, Pod labels, or even the presence of other Pods on a node.

# Types of Affinity
– Node Affinity: Similar to NodeSelectors but more expressive. It allows you to specify hard or soft requirements for node labels.
– Pod Affinity: Ensures that Pods are scheduled on the same nodes as other Pods with specific labels.
– Pod Anti-Affinity: Ensures that Pods are not scheduled on the same nodes as other Pods with specific labels.

# How to Use Affinity Rules
Example of Node Affinity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
```

Example of Pod Anti-Affinity:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - my-app
        topologyKey: "kubernetes.io/hostname"
```

# When to Use Affinity and Anti-Affinity
– Node Affinity: For advanced node selection requirements.
– Pod Affinity: To co-locate Pods that need to communicate frequently (e.g., frontend and backend services).
– Pod Anti-Affinity: To distribute Pods across nodes for high availability (e.g., avoiding single points of failure).

4. Mutually Exclusive Features

Some of these scheduling mechanisms are mutually exclusive or overlap in functionality. Here’s a quick guide:

– NodeSelectors vs. Node Affinity: NodeSelectors are simpler but less expressive than Node Affinity. Use NodeSelectors for basic requirements and Node Affinity for advanced rules.
– Taints/Tolerations vs. Node Affinity: Taints and Tolerations are used to repel Pods from nodes, while Node Affinity is used to attract Pods to nodes. They can be used together for fine-grained control.
– Pod Affinity vs. Pod Anti-Affinity: These are complementary. Use Pod Affinity to group Pods and Pod Anti-Affinity to spread them out.

5. Other Pod Scheduling Options

Kubernetes also provides additional scheduling options:

– Manual Scheduling: Assign Pods to specific nodes using the `nodeName` field in the Pod spec (see the example after this list).
– DaemonSets: Ensure that a copy of a Pod runs on all (or specific) nodes in the cluster.
– Priority and Preemption: Assign priorities to Pods to influence scheduling decisions.
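As a quick illustration of manual scheduling, setting `nodeName` bypasses the scheduler entirely. A minimal sketch, where `worker-1` is a placeholder for a real node name in your cluster:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeName: worker-1   # must exactly match an existing node name
  containers:
  - name: my-container
    image: nginx
```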

Best Practices for Pod Scheduling

1. Use NodeSelectors for Simple Requirements: If you only need to match a few node labels, NodeSelectors are the easiest option.
2. Use Taints and Tolerations for Node Isolation: Reserve specialized nodes for specific workloads or maintenance.
3. Use Affinity and Anti-Affinity for Advanced Scenarios: Define complex scheduling rules to optimize performance and availability.
4. Avoid Overlapping Rules: Ensure that your scheduling rules don’t conflict, as this can lead to unpredictable behavior.
5. Test Your Configurations: Always test your scheduling rules in a staging environment before deploying to production.
6. Monitor and Adjust: Use monitoring tools to track Pod placement and adjust your rules as needed.

Conclusion

Kubernetes provides a rich set of tools for controlling Pod allocation, from simple NodeSelectors to advanced Affinity and Anti-Affinity rules. By understanding these mechanisms and using them effectively, you can optimize the performance, reliability, and efficiency of your applications.

Whether you’re running a small cluster or a large-scale deployment, mastering Pod scheduling is key to getting the most out of Kubernetes. So, start experimenting with these features and see how they can help you achieve your goals!

Building community around – Kubernetes Kerala Meetup

I have been trying to get this group to meet and gauge the vibe we have in Kerala, India around Kubernetes and the Cloud Native ecosystem. The coordination was done through a meetup group, https://www.meetup.com/kubernetes-openshift-kerala-meetup/, which was born a few years ago. There had been small gatherings and discussions in small groups about conducting events, but the pandemic came in as a blocker for the in-person events planned, and I was stuck. I was hesitant to hold a virtual event, especially while we were in phase 1 of building a community. However, in July 2022, I thought about revamping the group and resuming the attempt to build a community via a virtual meetup. Finally it was announced here: https://www.meetup.com/kubernetes-openshift-kerala-meetup/events/287466261/

I was expecting a small turnout considering all the circumstances, but surprisingly I was wrong. More than 20 people joined, and it was a really great event.

We were privileged to have a couple of great speakers available and presenting at this event.

We kicked off the event on Aug 20th with a welcome note from me, followed by a talk on an ETCD deep dive from Ranjith Rajaram. It was a great, in-depth session on ETCD, a core component of a Kubernetes cluster.
Those who missed the talk can watch it here

After a break, we resumed the event with a talk on Kubernetes 101 from Sreejith Anujan.

Both of these talks were well received, and we got great feedback about the event and the talks during the open forum slot at the end of the event.

There is more pulse here locally that I am waiting to ignite. Looking forward to the next event, planned for September.

If you are around, and if you care about knowledge sharing and learning together, please join and share with your friends or colleagues!

Email: kubernetes.kerala@gmail.com
Forum : https://groups.google.com/forum/#!forum/kubernetes-kerala
Youtube Channel : https://www.youtube.com/channel/UCpdYLCt-lpAZkM3xcEby10Q
Twitter: https://twitter.com/K8sKerala

Reach out to me if you have any questions or if you would like to help with the event by presenting a topic, etc.!

Quick unwinding of 2020 & plans for 2021

I thought of unleashing knowledge in this space via a series of articles by the end of last year. However, I couldn’t spend time at the keyboard due to various things. Even though many articles were in draft, I couldn’t polish or publish them in the last few months. Better late than never, I am planning to publish all those articles and keep this space filled with content. Watch this space for more technical articles. Also, I will respond to the emails and queries received in my personal inbox queue too. Apologies for the delayed response.

As a quick recap of 2020, it was a great year for me in every aspect. I really experimented this year with many things which make sense for the rest of my life, or in the long run. A lot more was added to my portfolio (will reveal it later :)) and I am really happy that I spent this year very wisely, unlike previous ones. I wish I could have done that earlier, but I believe in the principle that everything happens at its own pace and time, so we have to accept things that way. Some were blessings in disguise! Even though life is all about uncertainty, we have to plan and move on. We have to think, make choices, find opportunities, take risks, etc. The plans made at the end of 2019 were smoothly executed in 2020, so I repeated the same. With that, if everything goes as planned, 2021 is going to be a rocking year.

In short:

“There is more to life than increasing its speed.” –Mohandas Gandhi

Here it is … “Mastering KVM Virtualization (Second Edition)”!

[Image: Mastering KVM Virtualization (Second Edition) book cover]

After the blockbuster first edition (https://amzn.to/2TqNGLr) of `Mastering KVM Virtualization`, here it is! We are really happy to announce that our book on KVM virtualization has been available since Oct 23rd of this year. We had been receiving many requests to update the content to the latest and also to add some topics of general interest. We heard you, and finally it’s available! Our effort to share knowledge on KVM virtualization was well received in the market with the first edition of this book. Since its release, lots of emails have hit our inbox with very positive feedback. That also kept us motivated to dedicate extra time to the second edition, even though experience tells us that writing a book is tough, especially when you have a job and a family to take care of. It takes a bold mindset to devote time from the hours you get to relax outside your daily job. We have to admit that we were accustomed to this after our first book and managed to roll the second edition out. We hope this edition meets your needs for learning more about KVM virtualization, its internal working mechanisms, and how it can be integrated into different deployments and various products. Please let us know how we did on this attempt.

Also, I would like to mention that we have some free copies available for this edition. If you are interested in receiving one, hit me up here with a comment.

Happy Learning!

Amazon links:
https://amzn.to/3klWvSy
https://amzn.to/37wVQtU

Fedora 32 : Error “Could not resolve host: …” while building docker containers

After upgrading your Fedora host (in my case it was Fedora 32), are you facing issues while building Docker containers that say “Could not resolve host …”? I ran into the same error.

For example, a curl command in the Docker container was failing:

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0curl: (6) Could not resolve host: storage.googleapis.com

`dnf install` in the docker container build was failing:

CentOS-8 - AppStream 0.0 B/s | 0 B 01:00
Errors during downloading metadata for repository 'AppStream':
- Curl error (6): Couldn't resolve host name for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=container [Could not resolve host: mirrorlist.centos.org]
Error: Failed to download metadata for repo 'AppStream': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=container [Could not resolve host: mirrorlist.centos.org]

I got it resolved after executing the steps below:

Find your `wlan` interface:


#ip addr show |grep wlp
3: wlp0s20f3: mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 192.168.68.102/24 brd 192.168.68.255 scope global dynamic noprefixroute wlp0s20f3

Then enable masquerading on this interface:


#sudo firewall-cmd --get-zone-of-interface=wlp0s20f3
#sudo firewall-cmd --zone=FedoraWorkstation --add-masquerade --permanent
#sudo firewall-cmd --reload

Once you have enabled masquerading on the interface, restart the Docker service.


#sudo systemctl restart docker

Now try to build your container again!
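If you want to confirm the fix before a full rebuild, a quick DNS test from inside a container works well (assuming the busybox image can be pulled):


#docker run --rm busybox nslookup mirrorlist.centos.org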

Ref# http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/