Let’s Play with Kubernetes.

Sadil Chamishka
7 min read · Nov 16, 2019

Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. There are a few other container-orchestration systems, but Kubernetes has become the leading one.

Let’s assume a Python web application is deployed across 2 virtual machines, each running 2 containers. HAProxy is set up as a load balancer: requests come to HAProxy, and the load is distributed among the 4 containers.

The web application serves traffic very well, and there is no problem with the deployment. But after the deployment is done, someone has to make sure everything stays as healthy as it was initially. If a container suddenly goes down, a new container has to be deployed manually. If one of the virtual machines goes down, things get even worse: lots of manual configuration has to be done to restore the previous state of the service.

What if there were someone who preserved the state of the service as it is? If a container goes down, a new container is deployed. If a virtual machine goes down, another active virtual machine is used and the containers are deployed on it. It would be awesome to have an orchestrator who can do this orchestration. That’s why container-orchestration services came into the picture.

If we compare this with Kubernetes, the standing person is the manager (the master node) and the others are the workers (the worker nodes).

The master node includes a set of components in order to manage its worker nodes. Before the end of this article, I will explain how the communication happens between the master node and the worker nodes in order to keep the deployment stable. Bored with theory?

Let’s start to play with Kubernetes.

You can set up a Kubernetes cluster locally by setting up a few virtual machines. However, the installation and configuration needed to set up a Kubernetes cluster are not straightforward for a beginner. So what can we do?

Cloud providers offer Kubernetes as a managed service, which makes it much easier to set up a Kubernetes cluster. AWS provides EKS and GCP provides GKE.

I prefer to use GKE (Google Kubernetes Engine). First, you can create a free account on GCP (Google Cloud Platform), and $300 of credits will be given to you to get familiar with GCP. You will be asked for a supported credit/debit card number, but you don’t need to worry: you will not be charged until those free credits are over.

After successfully creating your account, you will be presented with a dashboard consisting of the many services provided by Google Cloud Platform.

Let’s create our Kubernetes cluster using the Kubernetes Engine provided by GCP. Follow these steps: Kubernetes Engine → Clusters → Create cluster

Then you will be asked for some configuration before the cluster is set up.

Name: you can provide a name for your cluster, e.g. “standard-cluster-1”.

Location type: you can keep the default values. This just asks where the infrastructure should be set up.

Master version: you can go with the default values.

Number of nodes: you can configure how many worker machines you need for your cluster. I gave 2.

Instances with 1 vCPU and 3.75 GB of memory are enough for our demo purposes.

There are some advanced configurations that can be done, but the minimum configuration mentioned above is enough for a beginner to set up a cluster. Then click to create the cluster (if you prefer the command line, an equivalent gcloud command is shown after the list below). It will take some time to set up the cluster, because a lot of configuration and installation has to be carried out:

  • 2 virtual machines have to be set up.
  • Docker has to be installed on each of them, as Docker is used as the default container runtime.
  • Kubelets have to be set up. The kubelet is the Kubernetes agent that resides on each instance in order to communicate with the Kubernetes master node.
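For reference, here is a sketch of a roughly equivalent cluster created from the command line. The zone is an assumption (any zone works), and n1-standard-1 is the machine type matching the 1 vCPU / 3.75 GB instances mentioned above.

# zone and machine type are assumptions; adjust to your own choices
gcloud container clusters create standard-cluster-1 --num-nodes=2 --machine-type=n1-standard-1 --zone=us-central1-a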

That’s all. Now you will be presented with the details of your cluster, as follows.

Now that we have set up the cluster, we can play with it. After you are done playing, remember to delete your cluster; otherwise, your free credits will run out soon.

You can interact with your cluster via the Google Cloud Shell. You can find it in the top right corner of the blue navigation bar.

You will be presented with a terminal to execute your commands. When we create the Kubernetes cluster, it falls under a project. We can view our projects by clicking the My First Project tab on the top blue navigation bar.
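You can also list your projects and their IDs directly from the Cloud Shell:

gcloud projects list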

You have to copy the project ID and execute this command first.

gcloud config set project <ID>
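For example, with a hypothetical project ID (yours will be different):

gcloud config set project my-first-project-123456   # hypothetical ID; use your own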

Then you will be working under that project. You also have to fetch credentials to access the Kubernetes cluster.

gcloud container clusters get-credentials <cluster-name> --zone <zone>
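With the cluster name from earlier, and assuming the example zone us-central1-a (use whatever zone your cluster was created in):

gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a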

Now you can communicate with the master node of your cluster. I will show you a set of commands to communicate with the master, but before that, let’s focus on what is going on behind the scenes.

Google Kubernetes Engine allows us to create the worker nodes only. The master node is created by GKE, and the workers simply get registered with the master.

Here, kubectl is used to send requests to the api-server from the terminal.

I said that the master node consists of a set of components that help it:

api-server: exposes a REST API for communicating with the cluster.

cluster store: manages cluster state and configuration.

scheduler: watches the api-server for new pods and assigns a node for them to run on.

controller: a daemon that watches the state of the cluster to maintain the desired state.

Each worker node also includes a few components. The kubelet is the Kubernetes agent that interacts with the master, and by default the container runtime is Docker.
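As a quick check that your kubectl really is talking to the api-server of your new cluster, you can print the endpoint it is configured against:

kubectl cluster-info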

You may be wondering how those components interact. Let’s assume we need to deploy our application on our cluster.

kubectl run <name of app> --image=<docker image> --port=8080

kubectl run profilelogtest --image=sadilchamishka/profilelogtest:latest --port=8080

One pod, with one container inside, will be deployed on one of the worker nodes.

A pod can have one or more containers; the pod is the basic unit in Kubernetes terminology. Now let’s see what happens when we execute the above command.

1. kubectl sends a request to the API server from the terminal.
2. The API server validates the request and persists it in the cluster store.
3. The cluster store notifies the API server back.
4. The API server invokes the scheduler.
5. The scheduler decides where to run the pod and sends that information to the API server, which persists it in the cluster store.
6. The API server invokes the kubelets on the corresponding nodes.
7. Those kubelets talk to the Docker daemon of each worker node to create the container.
8. The kubelet sends the status of the pod to the API server, and that information is persisted in the cluster store.
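You can watch some of this machinery yourself: the events recorded by the API server show the scheduling and container creation steps, and describing the pod shows which node it was assigned to (replace <pod-name> with the name kubectl get pods prints for you):

kubectl get events
kubectl describe pod <pod-name>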

Now you have made a deployment. Let’s check its status by talking to our master node. We can see how many pods are in our cluster:

kubectl get pods

We can see how many deployments have been made:

kubectl get deployment

Let’s scale our deployment:

kubectl scale deployment profilelogtest --replicas=2
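To see which node each pod landed on, use the wide output of kubectl get pods, which includes a NODE column:

kubectl get pods -o wide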

Now we have 2 pods, and those pods can be on one worker node or one pod per node. Let’s assume 10 pods are deployed: end users would have to know every pod’s IP address to get the service, which is not a good idea. Therefore we can configure a load balancer on top of the pods.

kubectl expose deployment profilelogtest --type=LoadBalancer

Now end users can send requests to the load balancer, and the requests will be distributed over the pods. You can get the public IP of the load balancer with:

kubectl get services
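It can take a minute or two for GKE to provision the load balancer; until then, the EXTERNAL-IP column shows <pending>. You can watch for it and then send a test request (port 8080 is the port we set earlier; the exact path depends on your application):

kubectl get services --watch
curl http://<EXTERNAL-IP>:8080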

You can see 2 services running in our cluster. “profilelogtest” is the deployment we made; “kubernetes” is the other service, which runs in our cluster behind the scenes as part of the container orchestration.
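Finally, as promised earlier, delete the cluster when you are done so your free credits are not consumed. With the example name and zone used above:

gcloud container clusters delete standard-cluster-1 --zone us-central1-a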
