I’ve really been feeling the love for Go recently. I love its simplicity. I love its superb developer experience. I love how my tests run faster and my microservices spin up faster than I can blink! I love that it has so many of the utilities I need for creating microservices built into its standard library. Oh, and did I mention that its explicit error handling is a joy – so much nicer than being blind-sided by the exception throwing so common in many other languages. It’s just a truly wonderful language and ecosystem to work with, especially when it comes to backend services.
In a previous post, I spoke about how Kubernetes simplifies the tasks of deploying and managing microservices in the cloud. Managing a whole suite of microservices, together with the infrastructural elements they depend on (e.g. message queues and databases), can lead to quite a lot of Kubernetes specifications though. It’s a lot of yaml to be managing! Also, how do we manage all of this across all our different environments? This is where Helm comes in. It looks after parameterising and packaging all the Kubernetes specifications needed to deploy and manage all the elements that make up the system we want to deliver.
The great thing about using Helm and Kubernetes is that we can replicate the prod environment that we are deploying to on our local machines. A Kubernetes cluster can be created locally with ease using tools such as Minikube, Rancher Desktop and others. We can then deploy to our local Kubernetes environment with Helm in the same way we deploy to cloud-based environments!
But wait! If everything is running in our local Kubernetes cluster, isn’t that a total pain constantly having to spin up pods or get our source code changes compiled into running pods? Nope! Skaffold does this for us and more!
courses-service
To put all of this into action, we’re going to imagine that we are building a small part of an e-learning platform. We are going to build the courses-service. This is a microservice that handles data related to the courses that are offered on the platform. We’re going to keep this pretty simple. All the microservice is going to do for now is offer an API to create a new course and to retrieve all existing courses. It will store these courses as records in a table in a Postgres DB.
Getting set up
To follow along, you will need Go installed. You also need a local Kubernetes cluster. Rancher Desktop does the job here, and it also includes helm and kubectl, which is a command line tool for interacting with our Kubernetes cluster. These can be enabled under “Supporting Utilities”. If you get a permissions error here though, no worries, we can always just add them to the PATH.

On M1 macOS, these utilities are located here:
/Applications/Rancher\ Desktop.app/Contents/Resources/resources/darwin/bin
This can be added to your PATH in .zshrc or .bashrc etc. (depending on which shell you use)
export PATH=/Applications/Rancher\ Desktop.app/Contents/Resources/resources/darwin/bin:$PATH
IDE
GoLand, like all JetBrains tools, is an amazing IDE for Go development. A free alternative is VSCode with the Go plugin. I use GoLand as I’ve been using JetBrains IDEs for years, so they are pretty embedded in my muscle memory, but VSCode is great too!
Let’s get coding!
We can create our courses-service as a Go module by creating a directory and running a go command to initialize a new module as shown below. As this e-learning platform is all about exchanging know-how, let’s call our module path “swapknowhow/courses”.
mkdir -p swapknowhow/courses
cd swapknowhow/courses
go mod init swapknowhow/courses
This microservice is going to store course information as records in a postgres db. However, it’s too soon to start thinking about the specific db technology we are going to use. Let’s just concentrate on building out the core use cases. We have two use cases: we want to be able to create a new course, and we want to be able to get back all the courses that have been created.
This is a very thin microservice with little or no logic. In this case, it can simply be tested at the API level. If this were something like Java Spring Boot, I would shy away from this because it would involve spinning up a local server to run my tests, which is slow! In Go, there is a package in the standard library for testing at the API level without even needing to spin up a server. Instances of Go’s ResponseWriter and Request can easily be created and passed to the handler functions we want to test.
I use TDD whenever I feel it makes sense for what I’m working on, and I used it in building out this microservice. The process is very iterative: I write just enough of a test to fail, then write the production code to make it pass, followed by some refactoring if needed. A lot of the time, the test that is sufficient to fail only fails because it can’t compile – usually because I haven’t created the core production code types and functions it depends on. In that case, creating these is enough to make the current test pass. It would be very unwieldy for the reader if I walked through every step of that iterative process here. Instead, I will show the final api test that I ended up with and explain the main parts. Then I will show the core production code that makes this all pass. This core code doesn’t know anything about a postgres db. It just knows that we will use a repository with a defined method set. Once the api tests and core production code are written, hooking in an implementation of the repository that uses a postgres db is pretty quick. Ideally, there would also be an integration test to cover the full end-to-end of making requests to the microservice and the microservice using postgres. Perhaps that can be the subject of a future blog post 🙂
So, with all that being said, let’s dive in. The real purpose of this blog post is to show how we can use Kubernetes, Helm and Skaffold to build and package our service, so I won’t dwell on the core Go code too much. The full code, along with the helm and skaffold configurations, is available on my GitHub.
The code for the API handlers is going to be in a directory called api. The api test code will go in here too. So the directory structure is as follows.

Our api tests are in courses_api_test.go. Since there are just two use cases – create a course and get all courses – this can really be tested in one test function, for the happy path at least.
This test is in the function below:
func TestCanCreateAndRetrieveCourses(t *testing.T) {
	api := Api{CoursesRepo: newInMemoryCoursesRepositoryStub()}
	courseToCreate := courses.Course{
		Name:           "test course",
		Rating:         5,
		Descripton:     "course to test",
		DurationMillis: 50,
	}
	jsonBody, _ := json.Marshal(courseToCreate)
	body := strings.NewReader(string(jsonBody))
	courseCreationRequest := httptest.NewRequest("POST", "/courses", body)
	courseCreationResponseRecorder := httptest.NewRecorder()
	api.Courses(courseCreationResponseRecorder, courseCreationRequest)
	courseCreationResponse := courseCreationResponseRecorder.Result()
	defer courseCreationResponse.Body.Close()
	if courseCreationResponse.StatusCode != 201 {
		t.Errorf("expected course courseCreationResponse status: %v actual: %v", 201, courseCreationResponse.StatusCode)
	}
	req := httptest.NewRequest("GET", "/courses", nil)
	recorder := httptest.NewRecorder()
	api.Courses(recorder, req)
	res := recorder.Result()
	defer res.Body.Close()
	bytes, err := ioutil.ReadAll(res.Body)
	if err != nil {
		t.Errorf("error reading from http response writer, %v", err)
	}
	var coursesResponse []courses.Course
	json.Unmarshal(bytes, &coursesResponse)
	if len(coursesResponse) != 1 {
		t.Errorf("Expected coursesResponse length :%v, got %v", 1, len(coursesResponse))
	}
	course := coursesResponse[0]
	if course != courseToCreate {
		t.Errorf("Expected :%v, got %v", courseToCreate, course)
	}
}
The main thing to see here is that we exercise our endpoints by calling a method on our Api struct type (I will show this in a bit). This method handles GET and POST requests and delegates to its own private handlers. So, we call the endpoint to create a course here
courseCreationRequest := httptest.NewRequest("POST", "/courses", body)
courseCreationResponseRecorder := httptest.NewRecorder()
api.Courses(courseCreationResponseRecorder, courseCreationRequest)
and we call the endpoint to get all created courses here:
api.Courses(recorder, req)
res := recorder.Result()
defer res.Body.Close()
bytes, err := ioutil.ReadAll(res.Body)
if err != nil {
	t.Errorf("error reading from http response writer, %v", err)
}
var coursesResponse []courses.Course
json.Unmarshal(bytes, &coursesResponse)
The rest of this test function is all about setting up test data – i.e. a Course (an instance of a struct that I will show in a little bit) – and asserting on the responses from our endpoints. I think to explain things further, we need to get into the production code.
In Go, the code that you want to remain private to your module goes inside a directory called “internal”. Our core types can all go in here in one Go file called courses.go in a Go package called “courses”. The directory structure is shown below. You can ignore the build and db directories for now.

The contents of the courses.go file are shown below.
package courses

import (
	"github.com/gofrs/uuid"
	"time"
)

type Course struct {
	Uuid           uuid.UUID
	Created        time.Time
	Name           string
	Rating         int
	Descripton     string
	DurationMillis int
}

type CoursesRepository interface {
	GetCourses() []Course
	CreateCourse(course Course)
}
A course is represented as a struct called Course. It has a uuid to uniquely identify a course. The other fields are simple data fields that one would expect to find in a data type describing a course on most e-learning platforms – duration, rating etc.
We also define the CoursesRepository interface. This is the method set that a type needs to implement in order to conform to the interface. In our case, it’s simply a method to return a slice of Courses and one to create a course in the repository.
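As an aside, the service will later let postgres generate these UUIDs (via gen_random_uuid()), but if we ever wanted to generate them in Go, the gofrs/uuid package imported above supports that too. A tiny, hypothetical helper inside the courses package (not part of the service) could look like this:

// courseWithGeneratedId is a hypothetical helper, not part of the service.
// It shows how we could generate the UUID in Go rather than in postgres.
func courseWithGeneratedId(name string) Course {
	return Course{
		Uuid:    uuid.Must(uuid.NewV4()), // random (version 4) UUID
		Created: time.Now(),
		Name:    name,
	}
}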
With that in place, the rest of our api test should make more sense. The first line of our test function contained:
api := Api{CoursesRepo: newInMemoryCoursesRepositoryStub()}
Api is simply a struct with a method called Courses to expose our handler for GET and POST requests. It also has a field, CoursesRepo, to allow us to inject different implementations of our CoursesRepository interface. For the api test, we inject an in-memory repo where courses are simply saved to and retrieved from a slice. This in-memory implementation is also in our courses_api_test.go.
type CoursesRepositoryStub struct {
	courses []courses.Course
}

func (r *CoursesRepositoryStub) GetCourses() []courses.Course {
	return r.courses
}

func (r *CoursesRepositoryStub) CreateCourse(course courses.Course) {
	r.courses = append(r.courses, course)
}

func newInMemoryCoursesRepositoryStub() *CoursesRepositoryStub {
	return &CoursesRepositoryStub{courses: make([]courses.Course, 0, 10)}
}
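One optional Go idiom (not in the original code) is to add a compile-time assertion that the stub really does satisfy the interface; if the method set ever drifts, the test file simply stops compiling:

// Optional compile-time check: fails to build if *CoursesRepositoryStub
// no longer satisfies courses.CoursesRepository.
var _ courses.CoursesRepository = (*CoursesRepositoryStub)(nil)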
So the only thing remaining to make our api test pass is to implement the Api struct along with its Courses method. This goes in a file called courses_api.go in the same directory as our courses_api_test.go. Its contents (excluding the package declaration and imports) are shown below:
type Api struct {
	CoursesRepo courses.CoursesRepository
}

func (api *Api) Courses(writer http.ResponseWriter, req *http.Request) {
	switch req.Method {
	case "GET":
		api.getCourses(writer, req)
	case "POST":
		api.createCourse(writer, req)
	default:
		// Set the status code before writing the body; writing first would
		// implicitly send a 200 OK.
		writer.WriteHeader(400)
		writer.Write([]byte("Invalid method"))
	}
}

func (api *Api) createCourse(writer http.ResponseWriter, req *http.Request) {
	courseJson, err := ioutil.ReadAll(req.Body)
	if err != nil {
		fmt.Printf("error reading request body %v\n", err)
		writer.WriteHeader(500)
		return
	}
	var course courses.Course
	err = json.Unmarshal(courseJson, &course)
	if err != nil {
		fmt.Printf("error deserializing course %v \n", err)
		writer.WriteHeader(500)
		return
	}
	api.CoursesRepo.CreateCourse(course)
	writer.WriteHeader(201)
}

func (api *Api) getCourses(writer http.ResponseWriter, _ *http.Request) {
	coursesJson, err := json.Marshal(api.CoursesRepo.GetCourses())
	if err != nil {
		fmt.Printf("error marshalling courses %v", err)
		writer.WriteHeader(500)
	} else {
		_, err := writer.Write(coursesJson)
		if err != nil {
			fmt.Printf("error writing response %v", err)
		}
	}
}
There is actually only one public method here – the Courses method. In Go, capitalisation matters: a function is exported (public) simply by starting its name with an uppercase letter, while a lowercase first letter keeps it private to its package. Our Courses method inspects the HTTP method on the request it receives and delegates to a specific private method accordingly. These private methods, in turn, call out to the CoursesRepo on the Api struct to create a course or retrieve courses. The rest of the code deals with errors and uses the Go standard library for json marshalling and unmarshalling.
Adding Postgres
So for our courses-service microservice to work in the real world, we’ll need a proper data store. Postgres does the job here. So we need an implementation of our CoursesRepository interface that will call out to a postgres db. This is handled in another package within our internal directory. The package is simply called postgres and the code is in a Go file called postgres_repository.go with the directory structure shown below:

This uses a Go library called pgx, which is available on GitHub. To retrieve this and add it as a dependency to our module, we can simply run the following from the root of our module.
go get github.com/jackc/pgx/v4
This library gives us a connection pool to connect to our db. As we will see shortly, we can run our postgres db locally via Kubernetes. We will set up the db to run on port 5432 and we will port-forward to the Pod it runs in to enable our courses-service microservice to connect to it. The dbname will simply be coursesdb. So our connection string will be as follows:
"user=postgres password=password host=localhost port=5432 dbname=coursesdb"
For now, we will just hard-code this connection string to point to our local postgres that will be running in a local Kubernetes Pod. Later, we will make this configurable. We can set this up with a struct and a function to create a new instance of the struct as follows:
type PostgresCoursesRepository struct {
	dbPool *pgxpool.Pool
	Close  func()
}

func NewPostgresCoursesRepository() *PostgresCoursesRepository {
	connection := "user=postgres password=password host=localhost port=5432 dbname=coursesdb"
	dbPool, err := pgxpool.Connect(context.Background(), connection)
	if err != nil {
		// log.Fatal already exits the process with status 1, so no separate os.Exit is needed.
		log.Fatal("db connection could not be established")
	}
	return &PostgresCoursesRepository{
		dbPool: dbPool,
		Close:  func() { dbPool.Close() },
	}
}
Now we need to give this struct a method set that conforms to our CoursesRepository interface and uses the pgx library to interact with postgres as follows:
func (repo *PostgresCoursesRepository) GetCourses() []courses.Course {
	rows, err := repo.dbPool.Query(context.Background(), "select * from courses;")
	if err != nil {
		fmt.Printf("Error querying courses in db: %v\n", err)
	}
	defer rows.Close()
	var retrievedCourses []courses.Course
	for rows.Next() {
		var course courses.Course
		err := rows.Scan(&course.Uuid, &course.Created, &course.Name, &course.Rating, &course.Descripton, &course.DurationMillis)
		if err != nil {
			fmt.Printf("Error parsing course row %v\n", err)
		}
		retrievedCourses = append(retrievedCourses, course)
	}
	return retrievedCourses
}

func (repo *PostgresCoursesRepository) CreateCourse(course courses.Course) {
	insert := `INSERT INTO courses(uuid, created, name, rating, description, duration_millis)
               VALUES (gen_random_uuid(), now(), $1, $2, $3, $4);`
	_, err := repo.dbPool.Exec(context.Background(), insert, course.Name, course.Rating, course.Descripton, course.DurationMillis)
	if err != nil {
		fmt.Printf("Error inserting course into db: %v\n", err)
	}
}
The last thing we need to do in terms of Go code is to hook this up in our main package. This will be in our main.go file in the root of our module as follows:
import (
	"log"
	"net/http"

	"swapknowhow/courses/api"
	"swapknowhow/courses/internal/courses/db/postgres"
)

func main() {
	coursesApi := api.Api{CoursesRepo: postgres.NewPostgresCoursesRepository()}
	http.HandleFunc("/courses", coursesApi.Courses)
	log.Println("starting courses service on port 8082")
	err := http.ListenAndServe(":8082", nil)
	if err != nil {
		log.Fatal("could not start courses service")
	}
}
Running Postgres in Kubernetes
So, with our Go code in place, we need to turn our attention to our infrastructure. How do we spin up our postgres db locally? We can initially do this by deploying to our local Kubernetes cluster directly with the kubectl command line interface. Later, we will do this via helm instead. Then we can take it further and package up our courses-service code into an image that can also be deployed to our Kubernetes cluster with helm – Skaffold will help us out here!
Let’s create a postgres Pod spec and deploy it to our local Kubernetes cluster running through Rancher Desktop. When Rancher Desktop is up and running, it automatically configures the kubectl command line client to connect to the Kubernetes cluster that Rancher Desktop runs in its lightweight VM. We can confirm this by running the following command in our terminal.
kubectl config current-context
This should print
rancher-desktop
In a previous blog post, I described the main types of manifests that can be given to a Kubernetes cluster via the Kubernetes API in order to give it the specifications of what we want it to deploy. In order to run postgres in Kubernetes, we need to create a Kubernetes manifest containing a Pod specification and also, if we want data to persist across pod restarts, a PersistentVolume and PersistentVolumeClaim specification. This can all be specified in one file. We’ll call it “postgres-pod.yaml” and it sits in the directory structure shown below:

The Pod specification includes the name of our pod, the postgres image to use and postgres environment variables, among other specifications. It also contains a lifecycle hook with a shell command that will wait until postgres is up before executing a sql command to create our courses table:
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "apt update && apt install -y netcat && while ! nc -z localhost 5432; do sleep 1; done && psql -U postgres -d coursesdb -c 'CREATE TABLE IF NOT EXISTS courses(uuid uuid, created timestamp, name varchar, rating int, description varchar, duration_millis int);'"]
The full specification, along with the persistent volume spec, is shown below:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: coursesdb
  name: coursesdb
spec:
  containers:
    - args:
        - postgres
      env:
        - name: POSTGRES_DB
          value: coursesdb
        - name: PGDATA
          value: coursesdb
        - name: POSTGRES_PASSWORD
          value: password
      image: docker.io/library/postgres:latest
      name: coursesdb
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "apt update && apt install -y netcat && while ! nc -z localhost 5432; do sleep 1; done && psql -U postgres -d coursesdb -c 'CREATE TABLE IF NOT EXISTS courses(uuid uuid, created timestamp, name varchar, rating int, description varchar, duration_millis int);'"]
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 5432
          hostPort: 5432
      volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: coursesdb-pv-claim
  volumes:
    - name: coursesdb-pv-claim
      persistentVolumeClaim:
        claimName: postgres-pv-claim
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: coursesdb
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Okay, so with all that in place, now we can send it to the Kubernetes API of our local Rancher Desktop cluster via the kubectl command line tool by running the below in our terminal from the root of our module.
kubectl apply -f build/kubernetes/postgres-pod.yaml
Executing the following should show that the pod started up okay.
kubectl get events
This should output something like this (it won’t be exactly the same):
LAST SEEN TYPE REASON OBJECT MESSAGE
42s Normal Scheduled pod/coursesdb Successfully assigned default/coursesdb to lima-rancher-desktop
42s Normal Pulling pod/coursesdb Pulling image "docker.io/library/postgres:latest"
39s Normal Pulled pod/coursesdb Successfully pulled image "docker.io/library/postgres:latest" in 2.605928168s
39s Normal Created pod/coursesdb Created container coursesdb
39s Normal Started pod/coursesdb Started container coursesdb
With our postgres pod up and running in our local Kubernetes cluster, we need to be able to connect to it. We can do this by setting up port-forwarding to the pod by running the following in our terminal:
kubectl port-forward coursesdb 5432:5432
With our postgres db running, we can start up our microservice by running this from the root of our module:
go run .
Now we can use something like Postman to take our API for a spin.
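If you’d rather stay in Go than reach for Postman, a throwaway client like the sketch below exercises both endpoints. It assumes the service is listening on localhost:8082 and that the JSON field names mirror the Course struct exactly (no json tags are used):

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Create a course. Field names match the Course struct.
	body := []byte(`{"Name":"test course","Rating":5,"Descripton":"course to test","DurationMillis":50}`)
	createResp, err := http.Post("http://localhost:8082/courses", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("create failed:", err)
		return
	}
	createResp.Body.Close()
	fmt.Println("create status:", createResp.StatusCode) // expect 201

	// Retrieve all courses.
	getResp, err := http.Get("http://localhost:8082/courses")
	if err != nil {
		fmt.Println("get failed:", err)
		return
	}
	defer getResp.Body.Close()
	coursesJson, _ := io.ReadAll(getResp.Body)
	fmt.Println("courses:", string(coursesJson))
}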


Packaging our system as a Helm release
Helm is a package manager for Kubernetes. It allows us to template out the Kubernetes specifications that need to be sent to our Kubernetes cluster in order to bring our desired system to life. A helm chart is a declarative way of defining how we want to template and package the system that we are deploying to Kubernetes. We can then create instances of this chart by providing values for helm to inject into the chart templates. Helm calls such an instance a “release”. A release is an instance of our helm-packaged system running in a Kubernetes cluster.
We can create a skeleton for our helm chart in our build directory by running the following in that directory:
helm create courses
After running this, we can see that helm has created the following files and directories:

The templates directory contains templated Kubernetes manifests. The values.yaml contains default values to be injected into these templates before they are applied to Kubernetes when a helm release is being created. The Chart.yaml contains a description of our helm chart. Finally, .helmignore is where we can specify files and directories to be ignored by helm.
So let’s start by just converting our postgres pod and persistent volume into helm templates.
Inside the templates directory, we can remove everything. Then we can create a templated specification for our postgres coursesdb by creating a file called (the name is arbitrary) postgres-pod.yaml.
The contents will be pretty similar to the postgres-pod.yaml we had before, except certain parts will be templated so that we can get helm to inject values. The contents are as follows:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: {{.Release.Name}}-coursesdb
  name: {{.Release.Name}}-coursesdb
spec:
  containers:
    - args:
        - postgres
      env:
        - name: POSTGRES_DB
          value: {{.Release.Name}}-coursesdb
        - name: PGDATA
          value: {{.Release.Name}}-coursesdb
        - name: POSTGRES_PASSWORD
          value: {{.Values.coursesdb.password}}
      image: {{.Values.coursesdb.image}}
      name: {{.Release.Name}}-coursesdb
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "apt update && apt install -y netcat && while ! nc -z localhost 5432; do sleep 1; done && psql -U postgres -d {{.Release.Name}}-coursesdb -c 'CREATE TABLE IF NOT EXISTS courses(uuid uuid, created timestamp, name varchar, rating int, description varchar, duration_millis int);'"]
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 5432
          hostPort: 5432
      volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: {{.Release.Name}}-coursesdb-pv-claim
  volumes:
    - name: {{.Release.Name}}-coursesdb-pv-claim
      persistentVolumeClaim:
        claimName: {{.Release.Name}}-postgres-pv-claim
Helm uses Go templates. Parameters to be injected are referenced inside double curlies
{{}}
When helm creates a release, it binds values to these parameters by referencing fields and functions specified on objects. There is a built-in object called Release, which has a field called Name. To reference this Name field, we use {{.Release.Name}}. This is used in quite a few places in the template so that things like the db name will be prefixed with the name of the helm release. When we instruct helm to create a release, we pass the release name as a command line argument. Helm then assigns this to the Release.Name field, which is why it is available when helm is rendering our templates. In the above template, we have also templated out other parts by referencing fields on a Values object. This Values object is built from the values.yaml file that I mentioned earlier. It holds default values, but they can also be overridden in different ways – one of which is by passing the path to a values file as a command line argument when creating a helm release. For example, I have the POSTGRES_PASSWORD templated as {{.Values.coursesdb.password}}. This value is specified in values.yaml as:
coursesdb:
  password: password
Really, a password such as this should come from a secret management system, but that is beyond the scope of this blog post.
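Since Helm templates are just Go templates underneath, the rendering mechanism can be illustrated with a small standalone Go sketch. The data below is hypothetical and far simpler than Helm’s real Release and Values objects, but the {{.Release.Name}} lookup works the same way:

package main

import (
	"os"
	"text/template"
)

func main() {
	manifest := "name: {{.Release.Name}}-coursesdb\npassword: {{.Values.coursesdb.password}}\n"

	// Hypothetical stand-ins for Helm's built-in objects.
	data := map[string]any{
		"Release": map[string]any{"Name": "local"},
		"Values":  map[string]any{"coursesdb": map[string]any{"password": "password"}},
	}

	tmpl := template.Must(template.New("pod").Parse(manifest))
	// Prints "name: local-coursesdb" followed by "password: password".
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}

Helm layers extra template functions (such as the quote function used later in this post) and the values-file handling on top of this, and values can be overridden at install time, for example with something like helm install local . -f some-other-values.yaml.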
We can template out our persistent volume in a file in templates called (arbitrary name) postgres-persistent-volume.yaml. Its contents are as follows:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: {{.Release.Name}}-postgres-pv-volume
  labels:
    type: local
    app: {{.Release.Name}}-coursesdb
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
Also we need to template out our persistent volume claim. This can be done in a file (again, arbitrary name) called, say, postgres-persistent-volume-claim.yaml
Its contents are as follows:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{.Release.Name}}-postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
We then just need to make sure our values are added to the default values that helm generated in values.yaml. So we need to add the following to that file:
coursesdb:
  password: password
  image: docker.io/library/postgres:latest
We can create a release of this helm chart called “local” by running the following in the build/courses directory.
helm install local .
To see that this has worked, we can run the following after a few minutes, and we should see our coursesdb pod running:
kubectl get pods
local-coursesdb 1/1 Running 0 23s
We need to port-forward to our postgres pod as before:
kubectl port-forward local-coursesdb 5432:5432
We also need one minor change to tell our Go code to connect to a db now called local-coursesdb. This is in postgres_repository.go:
func NewPostgresCoursesRepository() *PostgresCoursesRepository {
	connection := "user=postgres password=password host=localhost port=5432 dbname=local-coursesdb"
	...
	...
With this in place we can run our courses-service from the root directory of our module
go run .
Making our courses-service configurable
Let’s get rid of the hard-coding of the postgres connection string in postgres_repository.go. We can make it configurable by creating a struct to carry the config and making a small change to our NewPostgresCoursesRepository function. The changes to postgres_repository.go are shown below:
type PostgresConfig struct {
	User         string
	Password     string
	Host         string
	Port         int
	DatabaseName string
}

func NewPostgresCoursesRepository(config PostgresConfig) *PostgresCoursesRepository {
	connection := fmt.Sprintf("user=%s password=%s host=%s port=%d dbname=%s",
		config.User, config.Password, config.Host, config.Port, config.DatabaseName)
	...
	...
Now we can pass in config from main.go as follows:
func main() {
	coursesApi := api.Api{CoursesRepo: postgres.NewPostgresCoursesRepository(
		postgres.PostgresConfig{
			User:         "postgres",
			Password:     "password",
			Host:         "localhost",
			Port:         5432,
			DatabaseName: "local-coursesdb"})}
	...
	...
Let’s go a step further and make our service configurable via environment variables. Luckily, the Go standard library makes consuming environment variables very straightforward.
With the change below in main.go, we can consume from environment variables instead:
func main() {
	pgUser, exists := os.LookupEnv("POSTGRES_USER")
	if !exists {
		log.Fatal("No POSTGRES_USER env variable")
	}
	pgPassword, exists := os.LookupEnv("POSTGRES_PASSWORD")
	if !exists {
		log.Fatal("No POSTGRES_PASSWORD env variable")
	}
	pgHost, exists := os.LookupEnv("POSTGRES_HOST")
	if !exists {
		log.Fatal("No POSTGRES_HOST env variable")
	}
	pgPort, exists := os.LookupEnv("POSTGRES_PORT")
	if !exists {
		log.Fatal("No POSTGRES_PORT env variable")
	}
	pgPortInt, err := strconv.Atoi(pgPort)
	if err != nil {
		log.Fatal("POSTGRES_PORT env variable must be a number")
	}
	pgDbName, exists := os.LookupEnv("POSTGRES_DB_NAME")
	if !exists {
		log.Fatal("No POSTGRES_DB_NAME env variable")
	}
	coursesApi := api.Api{CoursesRepo: postgres.NewPostgresCoursesRepository(
		postgres.PostgresConfig{
			User:         pgUser,
			Password:     pgPassword,
			Host:         pgHost,
			Port:         pgPortInt,
			DatabaseName: pgDbName})}
	...
	...
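As an optional tidy-up (not in the original code), the repeated lookup-and-fail pattern above could be collapsed into a small helper, for example a hypothetical mustGetEnv:

// mustGetEnv is a hypothetical helper: it returns the value of the given
// environment variable, or exits the process if the variable is not set.
func mustGetEnv(key string) string {
	value, exists := os.LookupEnv(key)
	if !exists {
		log.Fatalf("No %s env variable", key)
	}
	return value
}

main would then shrink to calls like pgUser := mustGetEnv("POSTGRES_USER"), with only the port still needing the strconv.Atoi conversion.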
To run this, we’ll need to set the required environment variables first on the command line:
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=password
export POSTGRES_HOST=localhost
export POSTGRES_PORT=5432
export POSTGRES_DB_NAME=local-coursesdb
go run .
Creating an image of the courses-service
In order to run our courses-service in Kubernetes, we will need to create an image. This can be done via Docker – the Docker command line interface also comes with Rancher Desktop. We just need a Dockerfile in the root of our module containing the following:
FROM golang:1.18-alpine
WORKDIR /swapknowhow/courses
# Copy the module files first so the dependency download layer can be cached.
COPY go.mod go.sum ./
RUN go mod download
# Copy the rest of the source and build the courses binary.
COPY . .
RUN go build
EXPOSE 8084
CMD ["./courses"]
Now we can build a local image by running the following in the root of our module:
docker build --tag swapknowhow/courses-service .
Running it all on Kubernetes via Helm
Now that we have made our courses-service configurable, it will be a lot easier to deploy it on Kubernetes and pass in the environment variables it needs via a Kubernetes ConfigMap. Let’s deploy it as a Kubernetes Deployment behind a Kubernetes Service. I gave an overview of Kubernetes Pods, Deployments and Services in a previous post. We can put all of these specifications into one file as a helm template. Let’s call it courses-service.yaml; it goes in the same directory as our other helm templates.

This file contains the templates for the Deployment, ConfigMap and Service, and is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Release.Name}}-courses-service-deployment
spec:
  selector:
    matchLabels:
      app: {{.Release.Name}}-courses-service
  replicas: 1
  template:
    metadata:
      labels:
        app: {{.Release.Name}}-courses-service
    spec:
      containers:
        - name: {{.Release.Name}}-courses-service
          image: {{ .Values.courses.image }}
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8084
          envFrom:
            - configMapRef:
                name: courses-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: courses-config
data:
  POSTGRES_USER: {{ .Values.courses.postgres.user | quote }}
  POSTGRES_PASSWORD: {{ .Values.courses.postgres.password | quote }}
  POSTGRES_HOST: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
  POSTGRES_PORT: {{ .Values.courses.postgres.port | quote }}
  POSTGRES_DB_NAME: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{.Release.Name}}-courses-service
  labels:
    app: {{.Release.Name}}-courses-service
spec:
  type: NodePort
  ports:
    - port: 8082
  selector:
    app: {{.Release.Name}}-courses-service
One thing to note here is that the variables on the ConfigMap have their values coming from our values.yaml file, e.g. for the postgres user:
{{ .Values.courses.postgres.user | quote}}
The value is piped through a template function called quote to surround it in quotes. Another part to note here is:
image: {{ .Values.courses.image }}
imagePullPolicy: IfNotPresent
The name of the image corresponds to the Docker image we built earlier. It will be injected into the template from values.yaml. Also, note that we specify IfNotPresent for the image pull policy. This means that the local image we built earlier will be used.
We need our values.yaml to match up with the values that our template expects.
We can add the following to values.yaml.
courses:
  image: swapknowhow/courses-service:latest
  postgres:
    user: postgres
    password: password
    port: 5432
    dbName: coursesdb
In order for our courses-service Kubernetes Service to talk to our coursesdb, we need to extend the postgres-pod.yaml template we created earlier. Let’s rename this file postgres-service.yaml. In this file, instead of defining the postgres pod directly, we will define a Kubernetes Deployment and a Kubernetes Service. The contents of this file are as follows:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}-deployment
spec:
  selector:
    matchLabels:
      run: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
      app: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
  replicas: 1
  template:
    metadata:
      labels:
        run: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
        app: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
    spec:
      containers:
        - args:
            - postgres
          env:
            - name: POSTGRES_DB
              value: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
            - name: PGDATA
              value: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
            - name: POSTGRES_PASSWORD
              value: {{.Values.coursesdb.password}}
          image: {{.Values.coursesdb.image}}
          name: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "apt update && apt install -y netcat && while ! nc -z localhost 5432; do sleep 1; done && psql -U postgres -d {{.Release.Name}}-{{ .Values.courses.postgres.dbName }} -c 'CREATE TABLE IF NOT EXISTS courses(uuid uuid, created timestamp, name varchar, rating int, description varchar, duration_millis int);'"]
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5432
              hostPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}-pv-claim
      volumes:
        - name: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}-pv-claim
          persistentVolumeClaim:
            claimName: {{.Release.Name}}-postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
  labels:
    app: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
One thing to note here is that we have named the service with templated values
name: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
This is important as it will be the postgres hostname that our courses-service microservice is injected with in order to connect to this postgres service. This can be seen if we look back at this line from courses-service.yaml earlier:
POSTGRES_HOST: {{.Release.Name}}-{{ .Values.courses.postgres.dbName }}
The Kubernetes cluster that Rancher Desktop creates includes CoreDNS, which allows service discovery by way of the namespace and name of the service. We also need a couple more values in our values.yaml to be consumed by this template:
coursesdb:
  password: password
  image: docker.io/library/postgres:latest
Again, the full code listing is available on my GitHub. With all of this in place, let’s deploy our system to our local Kubernetes cluster using helm! First, let’s delete our previous helm release:
helm delete local
Now to deploy our new release, we can run the below in the root of our module.
helm install local build/courses
We should see something similar to the following:
NAME: local
LAST DEPLOYED: Fri Apr 15 19:23:15 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
We can see our services on Kubernetes by running the following:
kubectl get svc
This will show something similar to the following:
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 9d
local-coursesdb ClusterIP 10.43.52.88 <none> 5432/TCP 2m17s
local-courses-service NodePort 10.43.127.123 <none> 8082:32203/TCP 2m17s
So that we can make requests to our courses-service and also connect to the db to test things out, we can set up port forwarding as follows:
kubectl port-forward services/local-courses-service 8082:8082
and in a separate terminal tab
kubectl port-forward services/local-coursesdb 5432:5432
Now, again, we can make a request in Postman to create a course:

I use Datagrip to connect to databases. After creating a course, and thanks to the port forwarding we set up, I can run a query against the postgres db (running inside our Kubernetes cluster):
select * from courses;
with the result shown below:

I can retrieve the created course by calling our courses-service through Postman.

Simplifying our workflow with Skaffold
Skaffold allows us to make code changes and have them automatically deployed to our Kubernetes cluster. It also sets up automatic port forwarding to our Kubernetes services, so we don’t have to do that manually as we did earlier. Instructions to install Skaffold are here.
On macOS, it can be installed as follows:
brew install skaffold
After skaffold is installed we can add a skaffold.yaml file at the root of our module containing the following:
apiVersion: skaffold/v2beta28
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: swapknowhow/courses-service
deploy:
  helm:
    releases:
      - name: local
        chartPath: ./build/courses
        artifactOverrides:
          imageKey: swapknowhow/courses-service
        imageStrategy:
          fqn: {}
Skaffold needs to know which image helm deploys so that it can substitute in the tag of the image it has just built. In this way, Skaffold, in dev mode, can monitor changes to our source code, rebuild the image and redeploy the necessary Pod in our Kubernetes cluster. It has a number of strategies for referencing the image. One is called fqn and it is the default, so, technically, it could have been left out of the config above. It references the image in helm through an image key, which we specified in the above configuration as:
imageKey: swapknowhow/courses-service
This should not include the tag. We need a small change to our helm template, courses-service.yaml. The image for the container needs to be set as follows:
image: {{ .Values.imageKey }}
We also need to add the following to our values.yaml:
imageKey: swapknowhow/courses-service:latest
Skaffold will actually override the above value.
So, now we can get our helm release up and running through Skaffold. We will do this in dev mode so that Skaffold will watch for code changes and re-build the image and re-deploy for us. Skaffold also takes care of port-forwarding automatically if we include the extra flag in the command below:
skaffold dev --port-forward
Running this command will output something like the following:
Listing files to watch...
- swapknowhow/courses-service
Generating tags...
- swapknowhow/courses-service -> swapknowhow/courses-service:84d44f8-dirty
Checking cache...
- swapknowhow/courses-service: Found Locally
Tags used in deployment:
- swapknowhow/courses-service -> swapknowhow/courses-service:2c6ce8400bcc0d8b59095569248e00b1d81184ee4c3053d1d388822aaf80506d
Starting deploy...
Helm release local not installed. Installing...
NAME: local
LAST DEPLOYED: Fri Apr 15 21:01:22 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
Waiting for deployments to stabilize...
- deployment/local-courses-service-deployment is ready. [1/2 deployment(s) still pending]
- deployment/local-coursesdb-deployment: waiting for rollout to finish: 0 of 1 updated replicas are available...
- deployment/local-coursesdb-deployment is ready.
Deployments stabilized in 24.184 seconds
Port forwarding service/local-coursesdb in namespace default, remote port 5432 -> http://127.0.0.1:5432
Port forwarding service/local-courses-service in namespace default, remote port 8082 -> http://127.0.0.1:8082
...
...
[local-courses-service] 2022/04/15 20:03:34 starting courses service on port 8082
Now let’s make a code change! In courses_api.go, let’s add a print statement when handling a POST request to create a course.
case "POST":
	fmt.Println("Creating Course")
	api.createCourse(writer, req)
On save, we can see Skaffold re-build the swapknowhow/courses-service image, with output similar to before.
Now if we hit the courses-service endpoint to create a course from Postman, we see the following in the logs emitted by Skaffold.
Watching for changes...
[local-courses-service] Creating Course
Isn’t that amazing! We can make live code changes and have them deployed to our local Kubernetes cluster on the spot!
GoLand Cloud Code plugin
This is made even simpler with the Cloud Code plugin for GoLand.

With this installed, it detects our Skaffold configuration automatically.

We can simply click to create a Cloud Code Kubernetes run configuration. The plugin will also install a managed Google Cloud SDK.

When that is all complete, we will see our run configuration.

We can kill our currently running skaffold dev with ctrl-c in the terminal. Now we can click play to run skaffold through GoLand. It might give this error:
parsing skaffold config: error parsing skaffold configuration file: unknown skaffold config API version "skaffold/v2beta28". Set the config 'apiVersion' to a known value. Check https://skaffold.dev/docs/references/yaml/ for the list of valid API versions.
If so, we can just use an earlier version of the Skaffold API, e.g. skaffold/v2beta24.
So, we just need to change the top line of our skaffold.yaml to:
apiVersion: skaffold/v2beta24
Also, I found that sometimes, running skaffold (either on the command line or through the Cloud Code plugin) after it has previously run successfully can output something like:
Error: UPGRADE FAILED: "local" has no deployed releases
deploying "local": install: exit status 1
Cleaning up...
release "local" uninstalled
If this happens, Skaffold will have cleaned up after itself, so simply run it again through the Cloud Code plugin or from the command line.
So, we can go ahead and click that play button in GoLand, and this time we will see output like we did when Skaffold ran successfully from the command line, only now the output will be in GoLand and will eventually emit the log from our courses-service:
[local-courses-service] 2022/04/15 20:39:04 starting courses service on port 8082
Now we can create and retrieve courses as we did before by hitting our courses-service endpoints from Postman!
Conclusion
It can take a little while to get everything set up. However, once our Helm chart and Skaffold config are in place, we get a wonderful dev experience right on top of a local Kubernetes cluster. It also makes it seamless for other engineers on your team to spin up the exact same dev environment on their machines. The amazing thing is, it doesn’t just have to be our local cluster. We could easily point kubectl to a Kubernetes cluster running in the cloud, e.g. Google Kubernetes Engine (GKE), and get the same ephemeral environment while writing our code locally! Thanks for reading through to the end of this post. I had great fun writing it and building out the courses-service microservice in Go right on top of a Kubernetes cluster running locally. The full code is on my GitHub.
For updates on new posts, you can follow me on Twitter or LinkedIn.