Deploying Kubernetes Multi-Node Cluster on AWS!
Hello Folks! In this blog I’ll show you how to set up a Kubernetes multi-node cluster manually on the AWS Cloud, from scratch. So, guys, let’s get going…
What is Kubernetes?
Simply put, Kubernetes is a container orchestration tool. It is an open-source platform used to manage containers, and it eliminates many of the manual processes involved in deploying and scaling containerized applications.
What is Kubernetes Multi-Node Cluster?
A Kubernetes cluster is a set of node machines for running containerized applications. A multi-node cluster is a cluster made up of more than one machine: typically one master (control-plane) node and one or more worker nodes, so that applications can be scheduled across multiple machines.
Is there any Pre-requisites?
- Umm!🤔 Yes. You need at least one AWS EC2 instance to act as the master node and at least one AWS EC2 instance to act as a worker node.
- I used the Amazon Linux 2 AMI when launching the EC2 instances; it is an RPM-based distribution similar to CentOS/RHEL, and it comes with some libraries pre-installed by AWS. You can use any other CentOS/RHEL-based system as well; in that case, most of the commands remain the same.
- While launching the AWS EC2 instances, make sure you open the required ports (or, for simplicity, all ports) in the associated security groups.
- Both EC2 instances need to be in the same VPC, and preferably in the same availability zone.
- You need PuTTY or a terminal to connect to your EC2 instances.
Connect to both EC2 instances using two PuTTY/terminal sessions. Decide which EC2 instance will be the master/manager and which will be the worker/slave. Henceforward we will call them the master and worker nodes. There are a few commands that we need to run on the master as well as the worker node.
So, let’s implement the Practical →
On both the Master and the Worker Node
Step 1: Installing Docker.
Become the root user:
sudo su -
After that, install Docker using yum, which comes pre-configured on the Amazon Linux AMI.
yum install docker -y
Now start the Docker service and enable it permanently so it starts on boot. Below is the command for the same:
systemctl enable docker --now
Step 2: Installing Kubernetes components.
Now we can proceed with installing kubelet, kubeadm, and kubectl:
- Kubelet: the agent program that runs on each node. It contacts the container engine via the Container Runtime Interface (CRI) to launch the Pods.
- Kubeadm: the program that bootstraps the multi-node cluster.
- Kubectl: the command-line program that users use to connect to and interact with the cluster.
Now, if you try to install kubeadm, it’s going to say “No kubeadm packages available” because yum is not yet configured for installing the Kubernetes components.
Step 3: Creating yum repo for Kubernetes Components.
Here is the yum repo configuration for the Kubernetes components.
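A repo file along the following lines can be created at /etc/yum.repos.d/kubernetes.repo. Note that this is the legacy Google-hosted repository that was standard at the time of writing; it has since been deprecated in favour of the community-owned pkgs.k8s.io repositories, so check the current Kubernetes install docs if these URLs no longer work.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

The exclude line is why the install command in the next step uses --disableexcludes=kubernetes.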
Now that your yum has been configured, you can use the command below to check:
yum repolist
Step 4: Installing Kubernetes Components.
Now we can proceed to install kubeadm, kubectl, and kubelet. Below is the command for the same:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Now enable the kubelet service using the command given below:
systemctl enable kubelet --now
You can check the status of kubelet using the command:
systemctl status kubelet
As you can see, the status of the kubelet service is “activating”; it will keep restarting until the cluster is initialized, so have a little patience…
Step 5: Pulling the Config Images using kubeadm.
If you run “docker images” at this point, you can see that no Docker images are present yet.
We need to pull the images required by kubeadm. Below is the command for the same:
kubeadm config images pull
As soon as you run the “kubeadm config images pull” command, kubeadm pulls all the required control-plane images.
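You can verify this by listing the Docker images again; you should now see control-plane images such as kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, and pause (exact names and tags depend on your Kubernetes version):

docker images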
Step 6: Changing the Docker cgroup driver from cgroupfs to systemd.
Now we need to configure Docker’s cgroup driver. The kubelet expects the “systemd” driver, while Docker’s default driver is “cgroupfs”, so we need to change it. To do this, go inside the “/etc/docker” directory, create a file named “daemon.json”, and set the driver to “systemd”.
Changing the Driver…
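A minimal /etc/docker/daemon.json that switches the cgroup driver looks like this (you can create it with any editor, or in one shot with a heredoc):

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF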
Step 7: Restarting the Docker Service.
After changing the driver, you need to restart the Docker service:
systemctl restart docker
Step 8: Installing the iproute-tc package.
Now install the iproute-tc package. It provides the “tc” (traffic control) utility, which Kubernetes uses to route traffic correctly inside the master-worker setup.
yum install iproute-tc -y
Step 9: Changing the iptables.
Now we need to tell the kernel to pass bridged traffic to iptables. Create the file /etc/sysctl.d/k8s.conf and write the following into it:
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
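Equivalently, you can write the file in one shot from the shell:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF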
After that, run the command below:
sysctl --system
The sysctl command is used to modify kernel parameters at runtime. Kubernetes needs the traffic passing over the Linux bridge to be visible to the iptables/ip6tables rules.
Step 10: Initializing the Cluster.
Only on the Master Node, initialize the cluster…
kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
Here we set the Pod network CIDR. I have used “--ignore-preflight-errors=NumCPU” and “--ignore-preflight-errors=Mem” because a Kubernetes control-plane node requires 2 CPUs and at least 2 GiB of RAM; in my case I launched the instances as t2.micro, so I included these arguments to skip those preflight errors. You can change this according to your requirements.
Wait for around 4 minutes. The following commands will appear in the output on the master node.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This will also generate a kubeadm join command with some tokens.
If you are not able to find the generated join command, you can regenerate it with the command below.
kubeadm token create --print-join-command
Now, if you check the status of the kubelet service, it is active.
Also, lots of containers (the control-plane components) get launched…
Step 11: Joining with the Master Node.
After initializing the cluster on the master node, copy the kubeadm join command from the output of kubeadm init.
Run on the Worker Node(s)
kubeadm join <arguments_copied_from_the_master_node_output>
Now go to the master node and run “kubectl get nodes” to check whether the slave is connected to the master or not.
kubectl get nodes
As you can see, both the Master and the Slave are connected, but they are not in the Ready state. That is because Kubernetes needs an “overlay network” so that pod-to-pod communication can happen properly. This overlay network is created by third-party CNI plugins. We are going to use Flannel to create that overlay network.
Step 12: Creating Overlay Network using CNI Plugin called Flannel.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After installing Flannel, the overlay network is created. Now you can see that both the Master and Slave nodes are in the Ready state.
Step 13: Configuring Flannel Plugin.
Now we need to configure the Flannel plugin, because you can see that on the master node the CoreDNS containers are not yet Ready:
kubectl get pods -n kube-system
To configure it, look at Flannel’s configuration file “/var/run/flannel/subnet.env”. As you can see, the network range my master node has and the one Flannel is using are different, so we need to edit Flannel’s network IP range accordingly.
Master Node IP Range : 10.240.0.0/16
Flannel Network Range: 10.244.0.0/16
Editing the ConfigMap of Flannel…
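With the manifest applied above, Flannel’s settings typically live in a ConfigMap named kube-flannel-cfg in the kube-system namespace (check with “kubectl get cm -n kube-system” if yours is named differently). Edit it and change the “Network” field of net-conf.json from Flannel’s default to the Pod CIDR we passed to kubeadm init:

kubectl edit cm kube-flannel-cfg -n kube-system

The relevant part of the ConfigMap should then look roughly like this:

  net-conf.json: |
    {
      "Network": "10.240.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }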
Deleting the Pods with label app=flannel …
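Assuming the Flannel pods carry the app=flannel label (as in the manifest above), they can be deleted so that they come back up with the new configuration:

kubectl delete pods -l app=flannel -n kube-system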
Relax! 😇 We’ve not lost our Flannel pods. The Flannel pods are managed by a DaemonSet, so the controller will launch the pods again with the new configuration.
Now the IP range of the master node and Flannel are the same. You can also see that my CoreDNS containers are now Ready.
Step 14: Let’s Test our Kubernetes Cluster via Deploying some Application over it.😎
Here I’m launching a pod via a Deployment and exposing it to the outside world.
kubectl create deploy myd --image=vimal13/apache-webserver-php
kubectl get pods -o wide
Exposing the Deployment so that the outside world has connectivity to it…
kubectl expose deploy myd --type=NodePort --port=80
kubectl get svc
Now, It’s Testing Time…🙈
Fetch the public IP of the slave or master node and the port number provided by the Service resource.
<Public_IP_of_slave_or_master>:<port_number>
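For example, with the NodePort 32171 that my Service got (yours will differ; check the output of kubectl get svc), the test from any machine looks like:

curl http://<public_ip_of_master_or_slave>:32171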
From Master’s IP…
From the Slave’s IP…
Voila! On port number 32171 my application is running🙈
NOTE: Whether the client hits the application using the Master’s IP or the Slave’s IP, they will have connectivity because of the kube-proxy program. Even if the pod is not running on that particular node, kube-proxy will forward the request to the node where it is running. Isn’t this amazing?
Hope you find my Blog Easy and Interesting!!🤞
Do like Comment and give a clap!!👏
That’s all. Signing off!😇
Thank You !