# Kubernetes installation demo using kubeadm

In this demo, we'll install Kubernetes v1.29 using the official [kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/) tool on a 2-node cluster.

## Node configuration

| hostname    | ip address         | Operating System |
| ----------- | ------------------ | ---------------- |
| k8s-master  | 192.168.121.35/24  | Ubuntu 22.04     |
| k8s-worker1 | 192.168.121.133/24 | Ubuntu 22.04     |

These 2 nodes need the following proxy to access the internet:

- http_proxy="http://proxy.fake-proxy.com:911"
- https_proxy="http://proxy.fake-proxy.com:912"

We assume these 2 nodes have already been configured with the corresponding proxy, so the internet is reachable both from the bash terminal and from the apt repositories.

## Step 0. Clean up the environment

If you have previously installed Kubernetes or any other container runtime (e.g. Docker, containerd) on either of these nodes, please make sure you have cleaned them up first:

- If a previous Kubernetes was installed on any of these nodes by `kubeadm`, please follow the listed steps to [tear down the Kubernetes cluster](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#tear-down) first.
- If a previous Kubernetes was installed on any of these nodes by `kubespray`, please refer to the kubespray doc to [clean up the Kubernetes cluster](https://kubespray.io/#/?id=quick-start) first.

Once the Kubernetes cluster has been torn down, run the following commands on all the nodes to remove the relevant packages and leftover files:

```bash
sudo apt-get purge docker docker-engine docker.io containerd runc containerd.io kubeadm kubectl kubelet
sudo rm -rf /etc/cni /etc/kubernetes /var/lib/kubelet /var/run/kubernetes /etc/containerd /etc/systemd/system/containerd.service.d /etc/default/kubelet
```

## Step 1. Install relevant components

Run the following on all the nodes:

1.
Export proxy settings in bash:

   ```bash
   export http_proxy="http://proxy.fake-proxy.com:911"
   export https_proxy="http://proxy.fake-proxy.com:912"
   # Please make sure you've added all the nodes' ip addresses into the no_proxy environment variable
   export no_proxy="localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,192.168.121.35,192.168.121.133"
   ```

2. Configure system settings:

   ```bash
   # Disable swap
   sudo swapoff -a
   sudo sed -i "s/^\(.* swap \)/#\1/g" /etc/fstab

   # Load the kernel modules required by containerd
   cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
   overlay
   br_netfilter
   EOF
   sudo modprobe overlay
   sudo modprobe br_netfilter

   # Set the sysctl params required by Kubernetes networking
   cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
   net.bridge.bridge-nf-call-iptables  = 1
   net.bridge.bridge-nf-call-ip6tables = 1
   net.ipv4.ip_forward                 = 1
   EOF
   sudo sysctl --system
   ```

3. Install and configure containerd:

   ```bash
   sudo apt-get update
   sudo apt-get install -y containerd

   # Use the systemd cgroup driver, as recommended for kubeadm on Ubuntu 22.04
   sudo mkdir -p /etc/containerd
   containerd config default | sudo tee /etc/containerd/config.toml
   sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

   # Let containerd use the proxy to pull images
   sudo mkdir -p /etc/systemd/system/containerd.service.d
   cat <<EOF | sudo tee /etc/systemd/system/containerd.service.d/http-proxy.conf
   [Service]
   Environment="HTTP_PROXY=http://proxy.fake-proxy.com:911"
   Environment="HTTPS_PROXY=http://proxy.fake-proxy.com:912"
   Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,192.168.121.35,192.168.121.133"
   EOF
   sudo systemctl daemon-reload
   sudo systemctl restart containerd
   ```

4. Install kubeadm, kubelet and kubectl from the official v1.29 apt repository:

   ```bash
   sudo apt-get update
   sudo apt-get install -y apt-transport-https ca-certificates curl gpg
   curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
   echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
   sudo apt-get update
   sudo apt-get install -y kubelet kubeadm kubectl
   sudo apt-mark hold kubelet kubeadm kubectl
   ```

## Step 2. Create the Kubernetes cluster

On node k8s-master, initialize the control plane and install a pod network add-on (we use Calico as an example here; any kubeadm-compatible CNI works, as long as its IP pool matches the `--pod-network-cidr` below):

```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl work for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the CNI add-on
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
```

On node k8s-worker1, join the cluster using the `kubeadm join` command printed at the end of the `kubeadm init` output:

```bash
sudo kubeadm join 192.168.121.35:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```

On node k8s-master, verify that both nodes become `Ready`:

```
vagrant@k8s-master:~$ kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k8s-master    Ready    control-plane   8m1s    v1.29.6
k8s-worker1   Ready    <none>          7m31s   v1.29.6
```

## Step 3 (optional). Reset the Kubernetes cluster

In some cases you may want to reset the Kubernetes cluster, e.g. if some commands after `kubeadm init` fail and you want to reinstall Kubernetes. Please check [tear down the Kubernetes cluster](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#tear-down) for details. Below is an example of how to reset the Kubernetes cluster we just created.

On node k8s-master, run the following command:

```bash
# drain node k8s-worker1
kubectl drain k8s-worker1 --delete-emptydir-data --force --ignore-daemonsets
```

On node k8s-worker1, run the following command:

```bash
sudo kubeadm reset
# manually reset iptables/ipvs if necessary
```

On node k8s-master, delete node k8s-worker1:

```bash
kubectl delete node k8s-worker1
```

On node k8s-master, clean up the master node:

```bash
sudo kubeadm reset
# manually reset iptables/ipvs if necessary
```

## NOTES

1. By default, normal workloads won't be scheduled to nodes with the `control-plane` K8S role (i.e. the K8S master node). If you want K8S to schedule normal workloads to those nodes as well, please run the following commands on the K8S master node:

   ```bash
   kubectl taint nodes --all node-role.kubernetes.io/control-plane-
   kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers-
   ```

2.
Verifying the K8S CNI

   If you see any issues with inter-node pod-to-pod communication, please use the following steps to verify that the K8S CNI is working correctly:

   ```bash
   # Create the K8S manifest file for our debug pods
   # (any image that provides ping works; the 2 replicas are spread across the 2 nodes)
   cat <<EOF | tee debug.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: debug
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: debug
     template:
       metadata:
         labels:
           app: debug
       spec:
         affinity:
           podAntiAffinity:
             requiredDuringSchedulingIgnoredDuringExecution:
             - labelSelector:
                 matchLabels:
                   app: debug
               topologyKey: kubernetes.io/hostname
         tolerations:
         - key: node-role.kubernetes.io/control-plane
           operator: Exists
           effect: NoSchedule
         containers:
         - name: debug
           image: busybox:1.36
           command: ["sleep", "3600"]
   EOF

   # Create the debug pods
   kubectl apply -f debug.yaml
   ```

   Find the pod IPs and the nodes they run on:

   ```
   vagrant@k8s-master:~$ kubectl get pods -o wide
   NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE
   debug-ddfd698ff-7gsdc   1/1     Running   0          91s   10.244.194.66    k8s-worker1
   debug-ddfd698ff-z5qpv   1/1     Running   0          91s   10.244.235.199   k8s-master
   ```

   Make sure pod `debug-ddfd698ff-z5qpv` on node k8s-master can ping the IP address of pod `debug-ddfd698ff-7gsdc` on node k8s-worker1, to verify that east-west traffic is working in K8S:

   ```
   vagrant@k8s-master:~$ kubectl exec debug-ddfd698ff-z5qpv -- ping -c 1 10.244.194.66
   PING 10.244.194.66 (10.244.194.66) 56(84) bytes of data.
   64 bytes from 10.244.194.66: icmp_seq=1 ttl=62 time=1.76 ms

   --- 10.244.194.66 ping statistics ---
   1 packets transmitted, 1 received, 0% packet loss, time 0ms
   rtt min/avg/max/mdev = 1.755/1.755/1.755/0.000 ms
   ```

   Make sure pod `debug-ddfd698ff-z5qpv` on node k8s-master can ping the IP address of the other node `k8s-worker1`, to verify that north-south traffic is working in K8S:

   ```
   vagrant@k8s-master:~$ kubectl exec debug-ddfd698ff-z5qpv -- ping -c 1 192.168.121.133
   PING 192.168.121.133 (192.168.121.133) 56(84) bytes of data.
   64 bytes from 192.168.121.133: icmp_seq=1 ttl=63 time=1.34 ms

   --- 192.168.121.133 ping statistics ---
   1 packets transmitted, 1 received, 0% packet loss, time 0ms
   rtt min/avg/max/mdev = 1.339/1.339/1.339/0.000 ms
   ```

   Delete the debug pods after use:

   ```bash
   kubectl delete -f debug.yaml
   ```
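If you run the east-west check often, it can be scripted by parsing the pod list. Below is a minimal bash sketch; it embeds the sample `kubectl get pods -o wide` output from the example above in place of a live cluster, and only prints the `kubectl exec … ping` commands it would run:

```shell
#!/usr/bin/env bash
# Sample `kubectl get pods -o wide` output (from the example above) stands
# in for a live cluster; on a real cluster you would populate this with:
#   kubectl get pods -o wide | tail -n +2
sample='debug-ddfd698ff-7gsdc   1/1   Running   0   91s   10.244.194.66    k8s-worker1
debug-ddfd698ff-z5qpv   1/1   Running   0   91s   10.244.235.199   k8s-master'

# For every debug pod, ping every *other* debug pod's IP (east-west check)
cmds=""
while read -r name _ _ _ _ ip _node; do
  while read -r peer _ _ _ _ peer_ip _peer_node; do
    if [ "$name" != "$peer" ]; then
      # On a live cluster, run the command instead of collecting it:
      #   kubectl exec "$name" -- ping -c 1 "$peer_ip"
      cmds+="kubectl exec $name -- ping -c 1 $peer_ip"$'\n'
    fi
  done <<< "$sample"
done <<< "$sample"

printf '%s' "$cmds"
```

With the sample above, this prints one `kubectl exec … ping -c 1 …` command per pod pair, i.e. each debug pod pinging the other pod's IP.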