In my previous articles, How To Create An Azure Kubernetes Cluster and Understanding Application Deployment On Kubernetes Cluster, I deployed a Kubernetes cluster and covered the basic concepts of deployment. Now I am going to explain how we can deploy an Nginx container onto the cluster.
I have created two .yml files, which you can execute for this deployment. These configuration files are consumed by the kubectl tool; based on their contents, the Kubernetes cluster carries out the deployment accordingly.
The first file is called app.yml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployement
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.0
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
```
In this file I have given all the configuration related to the deployment. In this deployment, Kubernetes pulls the Nginx image from Docker Hub. Azure Kubernetes Service can also pull the image from ACR (Azure Container Registry).
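If the image were stored in ACR instead, the image line in the Deployment would reference the registry's login server. A minimal sketch, where the registry name is hypothetical and should be replaced with your own:

```yaml
containers:
- name: nginx
  # Hypothetical ACR registry; replace "myregistry" with your registry's name
  image: myregistry.azurecr.io/nginx:1.17.0
```

For AKS to pull from a private ACR, the cluster also needs to be granted pull access to that registry.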
Let’s go through some of the important sections of this deployment file.
- apiVersion: apps/v1
This section specifies the API group and version of the resource being created; Deployment objects belong to the apps/v1 API. If a future Kubernetes release changes the Deployment API, this value would need to be updated accordingly.
- kind: Deployment
It specifies that the object being created on the Kubernetes cluster is a Deployment.
replicas specifies how many copies (pods) of this application/container should be created; since this is a demo, I have set it to 1. selector specifies which pods belong to this Deployment: the matchLabels entry must match the labels in the pod template (app: nginx-app here).
This section defines the container that will be created on the Kubernetes cluster. It is named “nginx”, and the image “nginx:1.17.0” will be pulled from Docker Hub to run in the container. The resources section specifies the CPU and memory requests for this container and the maximum limits it may use. I have also exposed containerPort 80, which is required for the container to serve HTTP requests. The pod template carries the label app: nginx-app, the name of the application that will run in this container.
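To make the resource units concrete, here is the resources block from the file again, annotated (the values are the same ones used above):

```yaml
resources:
  requests:          # minimum resources the scheduler guarantees to the container
    cpu: 100m        # 100 millicores = 0.1 of one CPU core
    memory: 128Mi    # 128 mebibytes of RAM
  limits:            # hard caps; the container cannot exceed these
    cpu: 250m        # throttled at a quarter of one CPU core
    memory: 256Mi    # exceeding this gets the container OOM-killed
```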
Now let’s talk about the second file, service.yml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx-app
```
The other file required for this deployment is service.yml. It creates a load balancer in front of the Nginx container running on the Kubernetes cluster, which allows us to reach the home page of the Nginx application.
Some important sections of this file are as follows,
This says the object is a Service, which will act as a front-end for the backend container.
It specifies the remaining configuration of the service: type: LoadBalancer means a load balancer will be provisioned and exposed on port 80 to serve HTTP requests, and the selector specifies the label of the application (app: nginx-app) to which traffic is forwarded.
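One detail worth noting: a Service can also declare a targetPort when the container listens on a different port than the one the service exposes. A small sketch of how the ports section would look in that case; in our file targetPort is omitted, so it defaults to the value of port:

```yaml
ports:
- port: 80        # port the load balancer exposes externally
  targetPort: 80  # port on the pod; defaults to `port` when omitted
```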
Now that we have created our deployment and service files, let’s go through the Kubernetes commands that are used to complete the deployment.
We will use the kubectl tool, which I described in detail in my previous articles; basically, it is used to communicate with the Kubernetes cluster.
As a first step, I will upload both files to Azure Cloud Shell from my local machine. You need to log in to the Azure portal to access Cloud Shell. Then we will execute the commands one by one.
kubectl apply -f app.yml
kubectl apply -f service.yml
kubectl get service nginx-service --watch
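The --watch flag keeps printing updates as the service changes. While Azure provisions the load balancer, EXTERNAL-IP shows <pending>, then switches to the assigned public IP. The output looks roughly like this (the IP addresses, node port, and timings below are illustrative):

```
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.0.37.27   <pending>     80:30572/TCP   6s
nginx-service   LoadBalancer   10.0.37.27   20.50.12.34   80:30572/TCP   64s
```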
Here we can see that our nginx-service has been assigned an external IP, which will be used to reach the application deployed in the container behind this load balancer service. It is associated with port 80.
So, if I copy this external IP into a browser and access the deployed Nginx application, we will see the Nginx welcome page, which means the deployment has completed successfully.
What we have done here is deploy our application to a pod running in a container on the Kubernetes cluster, and then take the external/public IP that was assigned to our service during the deployment, so that all HTTP requests are redirected to the backend application “nginx-app”; hence “nginx-service” acts as a load balancer here.
kubectl delete services nginx-service

By using this command, we can delete our deployed service. Similarly, kubectl delete deployment nginx-deployement removes the Deployment itself.
We have successfully deployed an Nginx app to the Kubernetes cluster, running in a container behind a load balancer service. We have also discussed how to create the deployment and service files in order to execute the deployment, and how to define the image of the application, which can be pulled from either Docker Hub or Azure Container Registry; both serve as registries to store images, and each can be private or public. The image we used is a public image provided by Nginx. I hope this demo is useful on your learning path for Azure Kubernetes Service and its important concepts.