Deliver rapidly with Kubernetes

There are many benefits to microservices. One of my favourites is how they enable teams to work independently and to continuously deploy to production. However, there is a lot to think about:

  • how do we make them resilient and fault-tolerant?
  • how can they scale elastically and cost-effectively?
  • how do they find each other, and the message queues they need, in order to communicate?
  • how can we monitor them and receive feedback from them to allow us to deliver better value?

Luckily, Kubernetes handles a lot of these concerns for us.

Getting started with Kubernetes and cloud-native development can be daunting. So let’s take it step by step: creating a microservice, deploying it to a Kubernetes cluster and exposing it to the outside world.

We don’t want to be laden with the onerous tasks of creating and managing our own Kubernetes cluster. This is where GKE on Google Cloud comes in. It handles the creation and management of Kubernetes clusters, so we don’t have to. Before we dive into building and deploying our microservice, let’s back up and take a whistle-stop tour of what Kubernetes is and of the Kubernetes objects we will be creating to complete our task.

Kubernetes Primer

Kubernetes is an open-source platform that abstracts away the complexities of managing distributed systems. Being infrastructure-agnostic, it can run on a developer’s laptop, in private datacenters, or in the cloud. The key to this abstraction is Kubernetes’ declarative specification approach. We give Kubernetes declarations of the requirements of our distributed system. These can include:

  • the container image for each application/microservice and for other components such as databases and message queues;
  • the images for the containers that make up a single deployable unit (called a Kubernetes Pod), e.g. a microservice;
  • the number of copies of each deployable unit, called "replicas", that should always be up and running (specified by a Kubernetes Deployment);
  • the way in which copies of a deployable unit will be exposed and load-balanced behind a single IP address (specified by a Kubernetes Service);
  • the configuration and secret management for each deployable unit.

These specifications can be declared once and Kubernetes can bring them to life anywhere. Kubernetes consumes them through the Kubernetes API. It stores them as Kubernetes "objects" and does its damnedest to make sure that the live running system always conforms to them.

Fundamental Kubernetes Components and Objects

To get our microservice up and running and replicated on Kubernetes, we will need a quick intro to some of its fundamental components and building blocks. To make this more concrete, it helps to know what we are trying to build and deploy on a Kubernetes cluster. My kids are having loads of fun learning about dinosaurs these days. So what better to use as an example microservice than a dinosaur microservice? Our dinosaur-service will have one very simple REST endpoint that returns a list of dinosaurs. Each dinosaur in the list will have a name, a phonetic spelling of the name, its diet and its length. The dinosaur-service will be built as a Docker image so that it can be easily pulled down from Docker Hub by the machines in our Kubernetes cluster.

The Kubernetes Cluster

A Kubernetes cluster is made up of one or more machines; each machine is a Node. Some of these act as the cluster Control Plane. This is the brain of the cluster. It accepts and stores specifications for the desired distributed system to be run on the cluster and ensures that the system always conforms to these specifications. It also reports on the status of the system and of its individual components. The other nodes in the cluster are Worker Nodes. These are the nodes on which our applications actually run, and there are usually several of them. The control plane will get instances of our dinosaur-service up and running on the worker nodes. To do this, though, it needs to know vital information like:

  • how many replicas of the dinosaur-service do we want?
  • what image does each worker node need in order to spin up our microservice containers?
  • how should the load be distributed across the replicas?
  • how will the replicas be exposed to the outside world so that we can actually make requests to them and get back our dinosaurs?

All of these answers and more can be provided to the control plane as specifications via the cluster Kubernetes API. These are stored as Kubernetes objects.

Figure 1 shows an overview of the Kubernetes components that are fundamental to understanding how replicas of our dinosaur microservice will be deployed. It also shows instances of one of the Kubernetes objects that we will use – the Kubernetes Pod. There are other important Kubernetes objects at play here, called a Kubernetes Deployment and a Kubernetes Service. These will be explained in due course.

Figure 1

As shown in Figure 1, we create specifications for the desired state of our system. These are posted to the Kubernetes API, a component running in the control plane, and saved as Kubernetes objects in a data store called etcd. Components called Controllers also run in the control plane. These interact with other Kubernetes components – via the Kubernetes API – and with outside components, such as cloud services, in order to realise the desired state of our system. One of these controllers is the Kubernetes Scheduler. It processes our Kubernetes objects and interacts – again via the Kubernetes API – with a Kubernetes component called the Kubelet that runs on each worker node. The Kubelet, in turn, interacts with the container runtime (e.g. Docker, containerd) on its worker node to create the container instances that run our applications. These are shown in Figure 1 as a "dinosaur-container" on each worker node.

One might naturally assume that a container is the smallest deployable unit in Kubernetes. However, this is not the case: the Kubernetes Pod is the smallest deployable unit. The specifications that we send to the Kubernetes API include Pod specifications. The Kubernetes API stores Pod objects in etcd, and the Scheduler looks after instructing each Kubelet to instantiate them. This is shown in Figure 1 as "dinosaur-pod-1", "dinosaur-pod-2" and "dinosaur-pod-3".

The Kubernetes Pod

Containers such as Docker containers are essentially Linux processes. When we say an application is running "inside a container", it sounds like it is running inside a Virtual Machine (VM) on top of Linux. However, it is actually running as a Linux process. The trick is that, for all intents and purposes, the application "thinks" it’s running on its own isolated machine. Linux achieves this by carefully splitting out resources between each of these processes (each "container"). For the purpose of explaining how a Pod can be the smallest unit of deployment in Kubernetes, Linux "namespaces" are the most useful of these mechanisms. Namespaces are a mechanism Linux uses to divide different types of resources between processes such that, from each process’s point of view, there has been no division of the resource. For example, it is this mechanism that allows two or more processes running on the same Linux OS to each listen on the same port, e.g. 8080. From the point of view of each process, it is listening on port 8080 of the native Linux OS. In reality, however, each 8080 port is virtual and belongs to the network namespace of the process that is listening on it. There are different types of Linux namespaces, some of which are:

  • The PID namespace – namespacing of process IDs
  • The Network namespace – namespacing of network stacks, ports, etc.
  • The Mount namespace – namespacing of mount points
  • The Unix Time Sharing (UTS) namespace – namespacing of hostnames etc.

Multiple processes can share the same namespace of each type. Further, two or more processes can share some types of namespaces while not sharing others. This is how a Pod can be the smallest deployable unit in Kubernetes. The containers that make up a particular Pod share certain namespace types, such as the network namespace, with the other containers in the same Pod, while not sharing other types. This means, for example, that each Pod can have its own internal IP address. There can be multiple instances of a Pod spread out across multiple nodes. However, the containers that make up a single Pod instance are always deployed together on one machine.
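To make this more concrete, below is a minimal Go sketch of the namespace mechanism (Linux only, and it typically needs root privileges). It launches a shell in fresh UTS, PID and mount namespaces, so changing the hostname inside that shell has no effect on the host. This is purely illustrative – it is not how Kubernetes itself creates containers:

package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell in new UTS, PID and mount namespaces. From the
	// shell's point of view it has its own hostname and its own
	// process tree, much like a container.
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}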

The Kubernetes Deployment

In order to provide redundancy, we deploy multiple copies, called "replicas", of our microservices. We spread our replicas out across nodes and, often in the cloud, across availability zones. So, if one node goes down, the replicas on other nodes have our backs. Managing these replicas across nodes and ensuring that the correct number are always running would be a very onerous task for even one microservice. For many microservices, it becomes virtually impossible. Luckily, this is something we don’t have to think through; Kubernetes handles it for us. As with other Kubernetes objects, such as Pods, we simply provide Kubernetes with a specification that includes the number of replicas of our microservice that should always be up and running. Kubernetes calls this type of specification a Deployment. We will write one for our dinosaur-service shortly.

The Kubernetes Service

So, great, we can get our dinosaur-service Pods up and running and adhering to a Deployment object. Each Pod has its own IP address. So once we expose our Pods to the outside world via our cloud load balancer, we should be good to go? Well, not really. If one of the Pods crashes, or if the node it is running on goes down, Kubernetes will create another Pod to replace it. As mentioned earlier, Kubernetes does its damnedest to make sure that our microservice replicas adhere to their Deployment specification. The replacement Pod will most likely have a completely different IP address and could be running on a different node. This poses a problem: how on earth can we keep track of where all our Pods are running at any moment in time? Luckily, we don’t have to! Kubernetes does this by way of its Service objects. A Service is a specification of a set of Pods that we want exposed on a single IP address. It also includes other requirements, such as how we want traffic to be load-balanced across the set of Pods that make up the Service.

So now we have an understanding of the Kubernetes components and objects that we will leverage to deploy replicas of our dinosaur-service and expose them via a single IP address. Let’s turn our attention to the dinosaur-service itself and get it implemented and packaged up into a Docker image.

The Dinosaur Microservice

The microservice consists of one simple endpoint:

/dinosaurs

It returns a JSON response body containing a hard-coded list of dinosaurs:

[
  {
    "Name": "Coelophysis",
    "Pronunciation": "seel-OH-fie-sis",
    "LengthInMeters": 2,
    "Diet": "carnivorous"
  },
  {
    "Name": "Triceratops",
    "Pronunciation": "tri-SERRA-tops",
    "LengthInMeters": 9,
    "Diet": "herbivorous"
  },
  {
    "Name": "Tyrannosaurus",
    "Pronunciation": "tie-RAN-oh-sore-us",
    "LengthInMeters": 12,
    "Diet": "carnivorous"
  },
  {
    "Name": "Diplodocus",
    "Pronunciation": "DIP-low DOCK-us",
    "LengthInMeters": 26,
    "Diet": "herbivorous"
  },
  {
    "Name": "Panoplosaurus",
    "Pronunciation": "pan-op-loh-sore-us",
    "LengthInMeters": 7,
    "Diet": "herbivorous"
  }
]

This microservice can be implemented in one Go file, in this case called dinosaur-service.go. The contents are as follows:

dinosaur-service.go

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)


type Dinosaur struct {
	Name           string
	Pronunciation  string
	LengthInMeters int
	Diet           string
}

func main() {
	dinosaurs := []Dinosaur{
		{
			Name:           "Coelophysis",
			Pronunciation:  "seel-OH-fie-sis",
			LengthInMeters: 2,
			Diet:           "carnivorous",
		},
		{
			Name:           "Triceratops",
			Pronunciation:  "tri-SERRA-tops",
			LengthInMeters: 9,
			Diet:           "herbivorous",
		},
		{
			Name:           "Tyrannosaurus",
			Pronunciation:  "tie-RAN-oh-sore-us",
			LengthInMeters: 12,
			Diet:           "carnivorous",
		},
		{
			Name:           "Diplodocus",
			Pronunciation:  "DIP-low DOCK-us",
			LengthInMeters: 26,
			Diet:           "herbivorous",
		},
		{
			Name:           "Panoplosaurus",
			Pronunciation:  "pan-op-loh-sore-us",
			LengthInMeters: 7,
			Diet:           "herbivorous",
		},
	}

	http.HandleFunc("/dinosaurs", func(writer http.ResponseWriter, request *http.Request) {
		jsonResponse, err := json.Marshal(dinosaurs)
		if err != nil {
			// Report an error rather than writing an empty body
			log.Println("cannot serialize dinosaurs")
			http.Error(writer, "internal server error", http.StatusInternalServerError)
			return
		}

		writer.Header().Set("Content-Type", "application/json")
		_, err = fmt.Fprintln(writer, string(jsonResponse))
		if err != nil {
			log.Println("could not write dinosaurs to the response")
		}
	})

	log.Println("Dinosaur service starting on port 8084")
	if err := http.ListenAndServe(":8084", nil); err != nil {
		log.Fatal("dinosaur-service could not start: ", err)
	}
}

This consists of a Dinosaur struct type to contain the data for each of our dinosaurs. The main function creates a Go slice of Dinosaurs, sets up a web server to listen on port 8084, and registers a handler function for our /dinosaurs endpoint. The handler function simply serializes the dinosaurs slice to JSON and sends it back in the HTTP response. The great thing about using Go is that this microservice can be fully implemented in one file using only the Go standard library, i.e. without the need to include any third-party libraries.

With Go installed, we can build and run the dinosaur-service as follows:

go build dinosaur-service.go
./dinosaur-service

Then we can hit our /dinosaurs endpoint using curl, Postman or whatever our favourite HTTP client is:

http://localhost:8084/dinosaurs
Accept: application/json
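For example, with curl:

curl -H "Accept: application/json" http://localhost:8084/dinosaurs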

In order for our dinosaur-service to be deployable to a Kubernetes cluster, it will need to be packaged as a container image. We can do this with Docker by including the following Dockerfile in the same directory as our dinosaur-service.go file.

Dockerfile

# Build on top of the official Go image
FROM golang:1.16

# Copy the source into the image and compile it
WORKDIR /app
COPY . .
RUN go build dinosaur-service.go

# Document that the service listens on port 8084
EXPOSE 8084

# Run the compiled binary when a container starts
CMD ["./dinosaur-service"]

With this in place, the following command will build the docker image:

docker build . -t [DOCKER_HUB_USERNAME]/dinosaur-service:latest

Because this image will later be pulled from Docker Hub to create the containers in our Kubernetes Pods, it needs to be made available there. The [DOCKER_HUB_USERNAME] placeholder needs to be replaced with the username of a Docker Hub account – specifically, the account currently logged into Docker Hub on our local machine.
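Before pushing, we can optionally sanity-check the image locally by running a container from it and hitting the endpoint as before:

docker run --rm -p 8084:8084 [DOCKER_HUB_USERNAME]/dinosaur-service:latest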

To push the image we built to Docker Hub, we use:

docker push [DOCKER_HUB_USERNAME]/dinosaur-service

Again, the [DOCKER_HUB_USERNAME] placeholder needs to be replaced with the username of a Docker Hub account.

Deploying the Dinosaur Microservice to Kubernetes

GKE is the Kubernetes offering on Google Cloud. It makes it straightforward to create Kubernetes clusters and to interact with them via the standard Kubernetes command-line client, kubectl. A Google Cloud account is needed to use GKE. Disclaimer: be careful to thoroughly understand the costs involved before using GKE or any other offering on Google Cloud.

The gcloud command-line client (https://cloud.google.com/sdk/docs/install) can be used to create a Kubernetes cluster on GKE. With gcloud installed, we can log into our Google Cloud account with the following:

gcloud auth login

We can then set our current Google Cloud project that gcloud points to with:

gcloud config set project [PROJECT_NAME]

replacing [PROJECT_NAME] with the name of a Google Cloud project. For those new to Google Cloud, the official documentation explains Google Cloud projects in detail.

We can create a Kubernetes cluster on GKE with the following:

gcloud container clusters create [CLUSTER_NAME] --num-nodes 3

replacing [CLUSTER_NAME] with the name we would like to give our cluster. The --num-nodes 3 argument specifies that we want 3 worker nodes in our cluster. With kubectl already installed, the above gcloud command will also configure kubectl to point to our newly created cluster. This can be confirmed by running the following:

kubectl config current-context
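We can also check that our three worker nodes are up and ready with:

kubectl get nodes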

The number of nodes in our cluster can later be scaled back down to 0 nodes using:

gcloud container clusters resize [CLUSTER_NAME] --size 0
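And when we are completely finished with the cluster, it can be deleted to avoid any further charges:

gcloud container clusters delete [CLUSTER_NAME]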

Now that our cluster is in place, we need to provide it with a specification for dinosaur-service deployments, including the number of replicas we require. We do this by supplying the cluster with a Deployment object via the cluster’s Kubernetes API. The Deployment object can be specified in a yaml file with a name of our choosing. Let’s call it dinosaur-deployment.yaml. The contents of this file are as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dinosaur-deployment
spec:
  selector:
    matchLabels:
      app: dinosaur-service
  replicas: 3
  template:
    metadata:
      labels:
        app: dinosaur-service
    spec:
      containers:
        - name: dinosaur-service
          image: [DOCKER_HUB_USERNAME]/dinosaur-service
          ports:
            - containerPort: 8084

The placeholder in this file, [DOCKER_HUB_USERNAME], needs to be replaced with the Docker Hub username that we used earlier when we pushed our dinosaur-service image to Docker Hub. This Kubernetes object can be sent to the Kubernetes API with the following:

kubectl apply -f dinosaur-deployment.yaml

We can then check the status of our deployment using:

kubectl get deployments

At first this might show something like the following:

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
dinosaur-deployment   0/3     3            0           26s

This means that no Pods are up and running yet. However, when we run the command again a little later, it will show something like the following:

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
dinosaur-deployment   3/3     3            3           27s
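We can also list the individual Pod replicas that the Deployment has created, using the app label from our Deployment specification as a selector:

kubectl get pods -l app=dinosaur-service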

So now that we have our dinosaur-service Pods up and running, we can expose them to the outside world by giving the Kubernetes API a specification for a Kubernetes Service object. We will use a type of Kubernetes Service called LoadBalancer. This, in turn, will instruct our cluster’s control plane to set up a Google Cloud load balancer to route traffic to our cluster nodes. Also, based on the Service object, Kubernetes sets up the iptables rules on each worker node via a Kubernetes component, called kube-proxy, that runs on every worker node. All of this ensures that each request to our dinosaur microservice reaches one of the dinosaur-service Pod replicas.

The Service object can be posted to the Kubernetes API as a yaml specification file, or it can be created with the following kubectl command:

kubectl expose deployment dinosaur-deployment --type=LoadBalancer --port 8084
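For reference, here is a sketch of a roughly equivalent declarative specification – the file name, say dinosaur-service.yaml, is our own choice – that could instead be applied with kubectl apply -f dinosaur-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: dinosaur-deployment
spec:
  type: LoadBalancer
  selector:
    app: dinosaur-service
  ports:
    - port: 8084
      targetPort: 8084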

We can get the details of this Kubernetes service with:

kubectl get svc

This will show something like:

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
dinosaur-deployment   LoadBalancer   10.87.253.183   <pending>     8084:31928/TCP   19s

Running the command again will eventually show the assigned external IP:

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
dinosaur-deployment   LoadBalancer   10.87.253.183   35.198.77.68   8084:31928/TCP   94s
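Behind the scenes, the Service keeps track of the IP addresses of its Pods as Endpoints, which we can inspect at any time with:

kubectl get endpoints dinosaur-deployment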

Using the external IP shown above, we can now hit our /dinosaurs endpoint using Postman or any other HTTP client of choice:

GET http://35.198.77.68:8084/dinosaurs
Accept: application/json
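Or, equivalently, with curl:

curl -H "Accept: application/json" http://35.198.77.68:8084/dinosaurs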

This returns our list of dinosaurs!

[
    {"Name":"Coelophysis","Pronunciation":"seel-OH-fie-sis","LengthInMeters":2,"Diet":"carnivorous"},
    {"Name":"Triceratops","Pronunciation":"tri-SERRA-tops","LengthInMeters":9,"Diet":"herbivorous"},
    {"Name":"Tyrannosaurus","Pronunciation":"tie-RAN-oh-sore-us","LengthInMeters":12,"Diet":"carnivorous"},
    {"Name":"Diplodocus","Pronunciation":"DIP-low DOCK-us","LengthInMeters":26,"Diet":"herbivorous"},
    {"Name":"Panoplosaurus","Pronunciation":"pan-op-loh-sore-us","LengthInMeters":7,"Diet":"herbivorous"}
]

Conclusion

I hope this blog post has given you a taste of what Kubernetes has to offer and how it can be leveraged for rapid delivery. Thanks for reading!
