Deploying Docker Apps on Google Container Engine (GKE)


It's 2017 and Google says they'll be deploying in Australia this year. That may not excite many of us since we all seem to be invested in AWS here, but I urge you to take another look. Specifically, at how easy and usable it is to take Docker into production.

Many developers I have had the pleasure of talking to only use Docker in development environments; some have used it in CI and CD, but the containers themselves don't run in prod. We hear so many nightmare stories about ECS and its problems, but I feel that GKE has done something beautiful. More competition is always better as well.

Why Containers?

So why is containerization so important? I don't like being locked to a specific vendor for anything, be it programming languages, frameworks or even cloud providers. The trend seems to be going towards completely managed services and functions like Azure Functions, Google Cloud Functions and AWS Lambda, to name a few. Sure, that's a fantastic idea and I'm all for it, but testability and visibility remain a problem, and you get roped into more of that platform's services for a complete solution. It is the way of the future, and time slicing nodes just makes sense. Until then though, Docker is one way to be more cloud agnostic: it's more testable, and it solves the "it works on my machine" problem.


So today I'm going to give a quick summary of how easy it is to get running on GKE. I'll point out some of the things that were great, show a few visualizations, and even go through Blue/Green deployment with zero downtime. All of this is available in Google's own documentation and tutorials as well as my Go slides: https://github.com/serinth/gke-example-go

To play along you'll need the following:

  • Go installed and configured properly.
  • Google Cloud CLI installed, with the Kubernetes component installed.
  • A Google Cloud account. You get $300 to trial everything and you will be prompted before they really charge you, which is more than enough for this.
  • Native Docker installed. I'm doing this on a Windows 10 Pro machine with Hyper-V enabled, but it will work on Linux and OS X as well.

Setting Up

First create your project on Google's cloud console. Name it whatever you like. Then in the CLI, authenticate and set your default project:

gcloud auth application-default login  
gcloud config set project <PROJECT NAME>  
gcloud projects list  

We also need to set up our Kubernetes config, so ensure that the environment variable KUBECONFIG is set to some place like $HOME/.kube/config. Create that file if it doesn't exist.
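For example, on a Unix-like shell (Windows users would set the variable through System Properties or PowerShell instead):

```shell
# Create the kubectl config file if it doesn't exist yet
mkdir -p $HOME/.kube
touch $HOME/.kube/config

# Point KUBECONFIG at it (add this to your shell profile to persist it)
export KUBECONFIG=$HOME/.kube/config
```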

You will notice that you get a project id. This is pretty important as that will be referenced many times including when we push our Docker image onto Google Container Registry (GCR).

Basic Web Application

Our web application simply has a health endpoint and outputs its version info:

package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    router := gin.Default()
    router.GET("/health", health)
    router.GET("/info", info)
    router.Run(":80") // listen on port 80, matching the containerPort later
}

func health(c *gin.Context) {
    c.JSON(http.StatusOK, gin.H{"status": "OK"})
}

func info(c *gin.Context) {
    c.JSON(http.StatusOK, gin.H{"version": "v1", "name": "Foo"})
}

If you're using the presentation framework to run my slides, you can just click on Run.

What we want to do now is Dockerize this application and push it to GCR.

Compile the app into a statically linked ELF binary for Linux (yes, I created an ELF binary for Linux on Windows) and build the Docker image:

CGO_ENABLED=0 GOOS=linux go build -a --ldflags="-s" --installsuffix cgo -o webapp  
docker build -t gke-webapp:v1 .  
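The docker build step assumes a Dockerfile in the project root. Since the binary is statically linked, a minimal one can build FROM scratch; this is a sketch, and the actual Dockerfile in the repo may differ:

```dockerfile
# Empty base image; works because webapp has no dynamic dependencies
FROM scratch
COPY webapp /webapp
EXPOSE 80
ENTRYPOINT ["/webapp"]
```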

Now we tag it with a version number and push it to GCR:

docker tag gke-webapp:v1 asia.gcr.io/dius-158701/gke-webapp:v1  
gcloud docker -- push asia.gcr.io/dius-158701/gke-webapp:v1  

Note that dius-158701 is the project id. Also, when you first push an image to GCR, only you have admin rights to it. Under the hood, Google Cloud Storage is used to host the Docker image. We can see this here:

Docker image on Google Cloud Storage

Kubernetes Terminology

  • Nodes are the actual virtual machines where the containers will run
  • Pods are the smallest unit and can contain 1 or more containers
  • Replication Controllers manage autoscaling of pods. They automatically add or remove pods based on the desired count and whether any fail.
  • Services are a logical set of pods and a policy — effectively a microservice. Services expose the pods and can act as a load balancer in front of them.
  • Deployments are a higher level abstraction than replication controllers. They provide declarative updates for Pods and ReplicaSets in YAML.

We're going to use Deployments, as they're the next iteration of replication controllers.

GKE runs Kubernetes, which is nice because Kubernetes is open source and we can use it on any other cloud provider. This matters as we try to remain more cloud agnostic.

Cluster Setup

Before creating a cluster we need to name it and tell it which regions and zones it will live in. Here's a list of Google's regions and zones.

gcloud config set container/cluster <NAME>  
gcloud config set compute/region asia-northeast1  
gcloud config set compute/zone asia-northeast1-a  

Now let's launch a 2 node cluster:

gcloud container clusters create dius-cluster --zone asia-northeast1-a --num-nodes 2  

By default, logging is enabled with Stackdriver, so we can always go into the console, query the logs and get a visual. Kubernetes is also great because logrotate is built in, so you don't have to worry about running out of space.

Do note though that the cluster is only in one region/zone. We can have this pool auto-scale and be multi-zoned but I'll leave that as an exercise to you, dear reader.
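As a sketch of what a multi-zone, autoscaling cluster could look like (these flags existed as of this writing, but gcloud changes frequently, so check gcloud container clusters create --help):

```shell
gcloud container clusters create dius-cluster \
  --zone asia-northeast1-a \
  --additional-zones asia-northeast1-b \
  --num-nodes 2 \
  --enable-autoscaling --min-nodes 2 --max-nodes 5
```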

To issue further commands, we need to store the credentials, certificate and other settings for the Kubernetes CLI.

gcloud container clusters get-credentials dius-cluster  

You only have to do this once per cluster. Now your kubectl command will reference this cluster.
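A quick sanity check that kubectl is now pointed at the right place:

```shell
kubectl config current-context   # should mention dius-cluster
kubectl get nodes                # should list the 2 nodes we created
```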

Deploying the WebApp

Let's take a look at a Kubernetes Deployment yaml file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: webapp
        environment: dev
    spec:
      containers:
      - name: webapp
        image: asia.gcr.io/dius-158701/gke-webapp:v1
        ports:
        - containerPort: 80

It's fairly simple to read. The important bits are:

  • replicas which tell us how many containers to run.
  • labels, which Kubernetes' built-in service discovery will reference to find your containers. The ports on the nodes don't really matter: the container listens on 80 and the port on the host node is randomized, but by creating a service in the next step we'll always be able to reach our app on port 80.
  • image is the Docker image we uploaded earlier.

Alright, let's launch these containers!

kubectl create -f webapp-deployment.yaml  


Keen on modifying the number of containers running? Just update the replicas and do:

kubectl apply -f webapp-deployment.yaml  

Then you'll get the new number of instances.
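Alternatively, you can scale imperatively without touching the YAML. This is handy for a quick experiment, though keeping the YAML as the source of truth is the better habit:

```shell
kubectl scale deployment webapp-deployment --replicas=4
kubectl get pods    # watch the new pods come up
```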

Visualizing the Cluster

There's also a container to help us visualize what's happening on our cluster. Use the weavescope.yaml provided on the GitHub page and deploy it the same way we did the webapp.

Now we need to tell Kubernetes to proxy to our local machine:

kubectl port-forward $(kubectl get pod --selector=weave-scope-component=app -o jsonpath='{.items..metadata.name}') 4040  

Then navigate to localhost on port 4040:
weave scope visualizing kubernetes cluster

Click around to see what's running. We should see our pods running on the nodes as well. As we continue in this post, you can keep referring to the visualizations to see what's actually happening on the cluster as we do updates and change the number of desired containers.

Exposing the WebApp as a Service

Take a look at webapp-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
    role: service
    tier: backend
spec:
  type: LoadBalancer
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    name: webapp
    environment: dev

Notice the selector. We're targeting the labels from earlier so that the service knows which containers to expose. This will return a public IP address so that we can actually hit the service, and all requests will be load balanced between the containers. To deploy it, issue the same command again:

kubectl create -f webapp-service.yaml  

Isn't that easy?!
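It takes a minute or two for Google to provision the load balancer. You can watch for the external IP and then hit the endpoints (substitute your service's actual IP):

```shell
kubectl get service webapp         # EXTERNAL-IP shows <pending> until ready
curl http://<EXTERNAL-IP>/health   # {"status":"OK"}
curl http://<EXTERNAL-IP>/info     # {"version":"v1","name":"Foo"}
```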

Blue/Green Deployment

So how do we update our application without any downtime? First, go through the same steps above but change the info path to output a different version.

Then change the image in our deployment, run kubectl apply and watch all the glory happen. Kubernetes will use a rolling-update scheme where it brings up new replicas and slowly switches traffic over to the new containers. We can watch the pods get updated in Weave Scope as this happens.
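Instead of editing the YAML, you can also trigger the rolling update imperatively. This sketch assumes you pushed a v2 image to GCR the same way as v1:

```shell
kubectl set image deployment/webapp-deployment webapp=asia.gcr.io/dius-158701/gke-webapp:v2
kubectl rollout status deployment/webapp-deployment
```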

I've included a watcher application in the repo that hits the info endpoint every 2 seconds. Just replace localhost with the IP address of the service.
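If you don't want to run the watcher from the repo, a shell one-liner does the same job (substitute your service's external IP):

```shell
while true; do curl -s http://<EXTERNAL-IP>/info; echo; sleep 2; done
```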

You should see something like this:

Blue Green rolling update google container engine and kubernetes


I wanted to treat this post as a bit of a reference, so I kept it brief; there are definitely more details in the GitHub repo than here. I didn't set out to cover every concept, since in all honesty you'll have to muck around with them as you work through it anyway, and they won't be hard to pick up. I hope this is enough to show how easy and powerful it is to deploy to a managed cluster, and that it becomes a useful reference for you too.

As always, thanks for reading!
