Automating Kubernetes Multi-Node Cluster and Deploying WordPress Application with MySQL Database on AWS using Ansible!
Hello Folks! In this Automation Blog Post I'll show you how to deploy a WordPress Application with a MySQL Database inside a K8s Cluster on AWS using Ansible!! So, Guys, let's get going…
If you have ever set up a Kubernetes cluster on your own, then you definitely know how painful a task it is: it consumes a lot of time and involves multiple stages to get the configuration just right.
If you are new to Kubernetes and don't know how to set up a Kubernetes Multi-Node Cluster manually, then check this out!
Deploying Kubernetes Multi-Node Cluster on AWS!
Hello Folks! In this Blog I'll show you how to set up a Kubernetes Multi-Node Cluster manually on AWS Cloud from scratch…
What is WordPress?
In simple terms, WordPress is a frontend web application which requires a Database to store its data.
So let's take a deep dive into this Practical →
Is there any Pre-requisites?
- Umm🤔! Yes, Ansible needs to be installed and configured.
- Some basics of Ansible are required.
- The Boto and Boto3 libraries need to be installed.
- The concept of Dynamic Inventory in Ansible is required. You can refer to the blog given below.
Configuring Apache WebServer and Reverse Proxy using Dynamic Inventory with Ansible Playbook on AWS
Hello Folks! Welcoming you all to the next Blog, which gives you a glimpse of how to configure Apache Webserver and HAProxy…
- An AWS account and an IAM User need to be created; store your access key and secret key.
The main purpose of this Blog is to automate a Kubernetes Cluster and deploy the WordPress App with a MySQL Database using Ansible Roles…
So, Let’s move on →
Here is the Config File for Ansible.
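For reference, here is a minimal sketch of what such an ansible.cfg typically contains for this kind of setup (the inventory path, remote user, and key file are placeholders; point them at your own environment):

```ini
# ansible.cfg -- a minimal sketch; paths, user, and key file are placeholders
[defaults]
inventory         = /etc/ansible/inventory    # location of the dynamic inventory script
remote_user       = ec2-user                  # default user on Amazon Linux AMIs
private_key_file  = /root/mykey.pem           # key pair used to launch the instances
host_key_checking = False                     # skip interactive SSH host-key prompts
roles_path        = ./roles

[privilege_escalation]
become        = True
become_method = sudo
become_user   = root
```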
Step 1: Launching Amazon EC2 Instance using Ansible Playbook.
To succeed in this Step you need the Boto and Boto3 libraries. These are Python SDKs which give Ansible the capability to talk to the AWS Cloud.
pip3 install boto boto3
Now, create an Ansible Vault to secure your AWS Credentials, i.e. the Access Key and Secret Key.
ansible-vault create <name.yml>
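Inside the vault file, the credentials are just two YAML variables; a sketch with dummy values (the variable names are my own choice, so use whatever names your playbook references):

```yaml
# cred.yml -- created via ansible-vault create; the values below are dummies
access_key: AKIAXXXXXXXXXXXXXXXX
secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```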
After this, write an Ansible Playbook for Launching EC2 Instance.
Here I'm going to launch 3 EC2 Instances. Out of the 3 Instances, 1 will work as the Kubernetes Master and the other 2 will be our Slave/Worker Nodes.
So, here is the Playbook for same.
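A rough sketch of such a playbook using the classic ec2 module looks like this (the AMI ID, key name, region, and security group are placeholders; swap in your own):

```yaml
# launch_ec2.yml -- a sketch; AMI, key, region, and security group are placeholders
- hosts: localhost
  vars_files:
    - cred.yml                # vault file holding access_key / secret_key
  tasks:
    - name: Launch the Kubernetes Master node
      ec2:
        image: ami-0xxxxxxxxxxxxxxxx      # placeholder Amazon Linux 2 AMI
        instance_type: t2.micro
        region: ap-south-1
        key_name: mykey
        group: allow-all                  # security group (placeholder)
        count: 1
        state: present
        wait: yes
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        instance_tags:
          Name: Master

    - name: Launch the Kubernetes Slave nodes
      ec2:
        image: ami-0xxxxxxxxxxxxxxxx
        instance_type: t2.micro
        region: ap-south-1
        key_name: mykey
        group: allow-all
        count: 2
        state: present
        wait: yes
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        instance_tags:
          Name: Slave
```

The Name tags matter later: the dynamic inventory groups hosts by them as tag_Name_Master and tag_Name_Slave.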
Now, It’s time to Launch our Playbook.
ansible-playbook --ask-vault-pass <name.yml>
Go to the AWS Portal and have a look: 3 Instances have been launched and all are in the Running state.
Step 2: Creating Ansible Roles for Master and Slave Nodes.
A Role is a way to manage Playbooks in an efficient, reusable manner.
Following is the command to create Roles in Ansible:
ansible-galaxy init <role_name>
Step 3: Configuring Master Node inside the k8s_master role.
To configure the Master Node inside the role, we've to edit the tasks file of the role, i.e. /k8s_master/tasks/main.yml.
In this main.yml file we've to write the steps to configure our Master Node.
So, here is the Playbook for Configuring Master Node.
I know, I know, the code is so, so Giant 😅 The Problem Statement looks simple but actually it's not. Lemme explain the Code in a simple manner…
- For installing K8s, we need Docker as the container engine. So the first step is to install Docker using the yum command.
- Secondly, start the Docker service and enable it permanently.
- Creating a proper yum repo file so that we can use yum commands to install the components of Kubernetes (kubeadm, kubelet, kubectl).
- Installing the kubeadm, kubectl, kubelet, and iproute-tc packages using the yum command.
- Enabling the Kubernetes service (kubelet).
- Pulling the config images using kubeadm.
- Changing the Docker cgroup driver from cgroupfs to systemd, since Kubernetes expects the systemd driver.
- After changing the driver, restarting the Docker service.
- Changing the iptables bridge setting (net.bridge.bridge-nf-call-iptables = 1) so bridged traffic is visible to Kubernetes.
- Initializing the Kubernetes Master. Here we've to set the CIDR, and I have used "--ignore-preflight-errors=NumCPU" and "--ignore-preflight-errors=Mem" because a K8s cluster requires 2 CPUs and at least 2 GiB of RAM. But in my case I've launched the Instances using t2.micro, so to skip these warnings I've included those args; you can change them according to your requirements.
- Creating the Overlay Network using the CNI Plugin called Flannel.
- The next step is to save the token: while initializing the master, kubeadm generates a kubeadm join command with some tokens. For doing so, use add_host.
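The bullet points above map roughly to tasks like these (an abridged sketch, not the full playbook; the repo and Flannel URLs reflect the setup at the time of writing, and the token_holder name plus the token create approach are my own illustrative choices):

```yaml
# roles/k8s_master/tasks/main.yml -- abridged sketch of the steps listed above
- name: Install Docker
  package:
    name: docker
    state: present

- name: Start Docker and enable it permanently
  service:
    name: docker
    state: started
    enabled: yes

- name: Set up the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

- name: Install kubeadm, kubelet, kubectl and iproute-tc
  yum:
    name: [kubeadm, kubelet, kubectl, iproute-tc]
    state: present

- name: Switch the Docker cgroup driver from cgroupfs to systemd
  copy:
    dest: /etc/docker/daemon.json
    content: '{ "exec-opts": ["native.cgroupdriver=systemd"] }'
  # restart Docker afterwards (e.g. via a handler) so the new driver takes effect

- name: Initialize the Master with the pod network CIDR
  shell: >
    kubeadm init --pod-network-cidr=10.244.0.0/16
    --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

- name: Install the Flannel CNI plugin (kubectl needs the admin kubeconfig set up)
  shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Regenerate the join command (equivalent to saving it from the init output)
  shell: kubeadm token create --print-join-command
  register: join_cmd

- name: Share the join command with the slave plays via add_host
  add_host:
    name: token_holder
    join_command: "{{ join_cmd.stdout }}"
```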
Step 4: Configuring Slave Nodes inside the k8s_slave role.
To configure the Slave Node role, we've to go to the tasks file of the k8s_slave role, i.e. /k8s_slave/tasks/main.yml.
In this main.yml file we've to write the steps to configure our Slave Nodes.
So, here is the Playbook for Configuring Slave Nodes.
Oops😅 again the Code is so, so Giant. Lemme explain this too.
- Almost 90% of the steps are similar to the Master Node; the only change is that you need to run the join command with the token which was generated by the Master.
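Concretely, the tail of the slave role just replays the saved join command (a sketch; it assumes the master play stored the command via add_host on a helper host, here named token_holder, which is an illustrative name of my own):

```yaml
# roles/k8s_slave/tasks/main.yml (tail) -- sketch; assumes the master play
# stashed the kubeadm join command via add_host under the name token_holder
- name: Join this node to the cluster using the Master's token
  shell: "{{ hostvars['token_holder']['join_command'] }}"
```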
So simple, right?😅
Step 5: Creating Roles for Launching WordPress App and MySql Database.
Along with the roles for the Master and Slaves, we also need a role for WordPress and MySQL. So altogether we've three roles.
The wordpress-mysql role will launch the WordPress and MySQL pods respectively, as well as expose the WordPress Pod publicly.
Step 6: Configuring WordPress App and MySql inside the role.
Now we need to create pods for WordPress and MySQL to launch them respectively.
To configure the wordpress-mysql role, we've to go to the tasks file of the wordpress-mysql role, i.e. /wordpress-mysql/tasks/main.yml.
In this main.yml file we've to write the steps to configure our WordPress App and MySQL Database.
So, here is the Playbook for same.
Hehe😅Isn’t this code simple, right? Umm🤔Yes, It is…
But lemme explain you this too…
- After the Master and Slaves are Up and in the Running State, we need to launch the pod for WordPress using this command:
kubectl run wp --image=wordpress:5.1.1-php7.3-apache
- Then we need to launch the pod for MySql using this command:
kubectl run db --image=mysql:5.7 --env=MYSQL_ROOT_PASSWORD=redhat --env=MYSQL_DATABASE=wpdb --env=MYSQL_USER=sumayya --env=MYSQL_PASSWORD=redhat
These are the required environment variables that need to be set while launching the MySQL pod. If you don't provide these variables, the pod will give an error.
MYSQL_ROOT_PASSWORD=redhat   # mysql root password
MYSQL_DATABASE=wpdb          # mysql database name
MYSQL_USER=sumayya           # mysql user name
MYSQL_PASSWORD=redhat        # mysql password
Note: Here you can use secret resource of Kubernetes to secure your Database password.
- Now, we need to expose our WordPress Pod to the outside world so that clients will be able to reach the WordPress site using the Public IP. Here, I'm exposing the Pod using the Service type called NodePort.
- After this: as we are automating the whole thing in just one click, we don't need to go inside the Master Node to check the exposed service port and the Database IP.
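A sketch of these two tasks inside the role (the pod name wp matches the kubectl run command above; the jsonpath query simply pulls out whichever NodePort Kubernetes assigned):

```yaml
# roles/wordpress-mysql/tasks/main.yml (excerpt) -- sketch; runs on the Master
- name: Expose the WordPress pod on a NodePort
  command: kubectl expose pod wp --type=NodePort --port=80

- name: Fetch the assigned port so we never have to log into the Master
  shell: kubectl get svc wp -o jsonpath='{.spec.ports[0].nodePort}'
  register: wp_port

- name: Show the URL to hit in the browser
  debug:
    msg: "Browse http://<master-public-ip>:{{ wp_port.stdout }}"
```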
Step 7: Summing up the Triplet Roles in one Playbook.
Now, create a playbook in the root project directory as setup.yml
In this Playbook, I’ve used host as tag_Name_Master and tag_Name_Slave because I’m using Dynamic Inventory.
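A setup.yml along these lines ties the three roles together (a sketch; it assumes the EC2 instances from Step 1 are already running and that the dynamic inventory groups hosts by their Name tags):

```yaml
# setup.yml -- sketch; hosts come from the dynamic inventory's tag groups
- hosts: tag_Name_Master
  roles:
    - k8s_master
    - wordpress-mysql      # kubectl runs on the Master, so this role goes here

- hosts: tag_Name_Slave
  roles:
    - k8s_slave
```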
Step 8: Launching the Playbook.
Now, it's high time to launch our Playbook…
Let’s Execute the Playbook
And there comes the Output of our Giant Playbook😎
Even though the Code was pretty big, it feels great when you pull off this kind of automation in just one click.
Just one command and whole Infrastructure has been Deployed. I mean, this is Amazing😇Isn’t it?
And look we’ve our Master and Slave Node Up and Running…
Step 9: Testing our WordPress App which is running Publicly.
It’s time to test our WordPress Application…🙈
Fetch the Public IP of your EC2 Instance and browse it in your Browser.
And the Installation is done successfully!!
And there comes the Dashboard of our WordPress Application😎
Write an interesting Blog and publish it…
Voila! We’ve reached our Milestone. Yeahh😎
Hope you find this Blog Easy as well as Interesting!!🤞
Do like Comment and give it a clap!!👏
That’s all. Signing off!😇
Thank You !