How to develop Kubernetes-friendly containerised applications, part 1

There’s a lot out there that describes how to set up Kubernetes and how to make your container run on Kubernetes, but fairly little about how you should be *developing* for Kubernetes. I hope to provide you with some guidance for this! Which is a bit ironic, as I’m actually not a developer but a sysadmin. So keep in mind that you’re free to write better code than mine; the code I provide here is for educational purposes only.

This first installment will just set up the basics. We’ll end up with a local “cluster” (using minikube) and a small application that we can run on it and examine. Nothing fancy, just setting things up for the next post, in which we’ll start tackling some of the problems we’ll encounter.

I’ll be writing my code in Python 3, but even if you’re not proficient with Python 3, I hope the code is simple enough that you can follow along in your own preferred language. I’ll assume you’re using Python 3.5 or higher (I’ve only tested my code on Python 3.5.3 on Linux).

A simple application

Let’s start with a simple example application, my_webserver.py:

import http.server

class MyWebpage(http.server.BaseHTTPRequestHandler):
    def do_GET(s):
        # Answer every GET request with a small, static HTML page
        s.send_response(200)
        s.send_header('Content-Type', 'text/html')
        s.end_headers()
        s.wfile.write(b'''
<!DOCTYPE html>
<html>
    <head>
        <title>Hello!</title>
    </head>
    <body>
        <p>This is a demo page!</p>
    </body>
</html>''')

if __name__ == '__main__':
    # Listen on the loopback interface only (we will revisit this choice later on)
    httpd = http.server.HTTPServer(('127.0.0.1', 8080), MyWebpage)
    httpd.serve_forever()

When you run this script with python3 my_webserver.py, it will open port 8080 on your loopback (lo) device. You can check that it works by opening another terminal and running:

$ echo "GET / HTTP/1.0" | nc localhost 8080
HTTP/1.0 200 OK
Server: BaseHTTP/0.6 Python/3.5.2
Date: Mon, 02 Oct 2017 12:30:09 GMT
Content-Type: text/html


<!DOCTYPE html>
<html>
    <head>
        <title>Hello!</title>
    </head>
    <body>
        <p>This is a demo page!</p>
    </body>
</html>

You can of course also use a browser to visit http://127.0.0.1:8080, which will show you a simple page with a single line on it.

This script is deliberately just a simple HTTP server, as that allows me to show you what problems you can encounter and how to deal with them. If you want to do something more elaborate, I would recommend using a framework of some sort, like Django or Flask. But for this series of posts, a simple HTTP server will suffice.

Setting up Kubernetes

Next, we need to run this on Kubernetes. As this is just a demonstration, I’m going to use minikube to show how this would work. I’m using version 0.22.2, but it should work on later versions as well. Start by bringing up your cluster:

$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ kubectl config use-context minikube
Context "minikube" modified.
$ eval $(minikube docker-env)
$

So, we now have a local Kubernetes running inside a VM and we’ve set up kubectl to use it by default (which saves us from typing --context minikube for each kubectl invocation).
Next, we’re going to create a simple Docker image for our application inside the minikube VM (so we do not have to publish it anywhere online).
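If you want to double-check that this worked, a quick sanity check could look something like this (entirely optional; the first command should list a single node called minikube in Ready state, the second should show that your docker client now talks to the engine inside the VM):

$ kubectl get nodes
$ docker info | grep Name: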

Packaging our application

To do so, we’re going to create a file called Dockerfile. This is the content:

FROM python:alpine3.6
WORKDIR /usr/src/app
COPY my_webserver.py ./
CMD ["python", "./my_webserver.py"]

For those unfamiliar with Dockerfiles, the first line makes sure we use the upstream python:alpine3.6 as our base. This is a small image available from Docker Hub that can easily be modified for running Python apps. Most languages (NodeJS, Ruby, Java, etc.) have these kinds of base images available on Docker Hub; you can search for them on https://hub.docker.com (don’t worry about creating an account, you don’t need one to just use the images). As Docker Hub is the default registry, you do not need to add a hostname in that line.
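If you’re curious what such a base image provides, you could (optionally) pull it and ask it for its Python version, just to make the idea of a base image a bit more concrete:

$ docker pull python:alpine3.6
$ docker run --rm python:alpine3.6 python --version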

The next line sets the working directory, in this case essentially doing a cd /usr/src/app to make sure all following commands are run from that directory. Then we copy our little my_webserver.py to that directory (thus creating /usr/src/app/my_webserver.py within the image). Lastly, we tell Docker how the application should be started. The python command is part of the image and, because we’ve set a working directory, we can just use ./my_webserver.py as the location of our script.

We then build our own custom image, based on the python:alpine3.6 image, like so:

$ docker build -t my_webserver:0.1 .

This command will build an image using the Dockerfile in this directory and name it my_webserver:0.1. Keep in mind here that we’re using the Docker engine running within the minikube VM, as we ran the eval command for that after starting minikube. The image will therefore be built inside the VM and can be accessed by Kubernetes directly. That’s helpful, as it means we do not need a registry available to be able to download the image.
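You can verify that the image is now known to the Docker engine inside the VM by listing it; you should see the my_webserver repository with the 0.1 tag:

$ docker images my_webserver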

Running our application on Kubernetes

Let’s try starting the image immediately:

$ kubectl run mywebserver --image=my_webserver:0.1 --image-pull-policy=Never --restart=Never

The options are important. We name our Pod mywebserver (an underscore is illegal in the name, so we cannot name it my_webserver). Next we tell Kubernetes to use the image that we just built, my_webserver:0.1. If we used an actual registry and had also pushed the image with the latest tag, we could leave the :0.1 part out, as latest is what Kubernetes (or rather, Docker) will pull if you do not provide your own version tag. But we’re not using a registry, so we have to be a bit more explicit. That’s fine though; being explicit with regard to the image version tag is a good thing, as it will allow you to roll back in case of problems.

The next option instructs Kubernetes to never try to download the image. That’s what we want, as we have the images locally and we’re not pushing them to Docker hub (which is the default server the Docker engine will try to download from if you do not provide a server name) or another registry for that matter. Lastly, we tell Kubernetes we do not want this Pod to automatically restart.

That restart option is important, because if we do not set it, Kubernetes will create a Deployment instead! Deployments are great, but not what we currently want to use. A third option would be --restart=OnFailure, which would create a Job. Jobs are used for batch processing and have a slightly different use compared to Pods. We want a Pod, however, so we tell Kubernetes to Never restart. Both Deployments and Jobs might be a subject for another blog series. Let me know if you’d be interested in something like that!
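Before we continue, it’s worth checking that the Pod actually came up. The output should look roughly like this (the AGE will obviously differ):

$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
mywebserver   1/1       Running   0          10s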

Kubernetes networking

You now have your first home-made Pod running, which is great! But how do we test that it works correctly? It’s running inside a VM, so we need to punch a hole through to it. The trouble is, this is pretty hard, as we’ve configured our code to listen on 127.0.0.1! So we need to change that first. Start by stopping the Pod:

$ kubectl delete pod mywebserver
$ kubectl get pods
NAME          READY     STATUS        RESTARTS   AGE
mywebserver   1/1       Terminating   0          27s

Keep running the kubectl get pods command every second or so until you no longer see the mywebserver Pod in the list. (Why does it take so long? We’ll get to that in a future post in this series! And fix it as well…) Now change the second-to-last line in our my_webserver.py script to:

httpd = http.server.HTTPServer(('0.0.0.0', 8080), MyWebpage)

(So simply change the 127.0.0.1 into 0.0.0.0.)

Now rebuild the image and remember to increase the tag:

$ docker build -t my_webserver:0.2 .
Sending build context to Docker daemon  3.584kB
Step 1 : FROM python:alpine3.6
 ---> 1b1ac8f23f73
Step 2 : WORKDIR /usr/src/app
 ---> Using cache
 ---> 5ef4a439346c
Step 3 : COPY my_webserver.py ./
 ---> af79248c42a3
Removing intermediate container 1d06297fe6b5
Step 4 : CMD python ./my_webserver.py
 ---> Running in 72c3032c5ce4
 ---> 02b0933a611f
Removing intermediate container 72c3032c5ce4
Successfully built 02b0933a611f

Listening on 0.0.0.0 will make it listen on all IP addresses that are available (to the container, once it runs), which is exactly what we want. As a container is used to contain the process, you can expose your application like this without worrying about security at this point; security is determined by how we eventually expose the application in the production environment. So once you start creating your own applications, just make them listen on all interfaces. And on whatever port you like (for now… We’ll do something with that later on as well!).
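If you’d like to convince yourself that the new image really answers on all interfaces before involving Kubernetes, you could briefly run it with plain Docker. This is just an optional sketch: it assumes nothing else inside the minikube VM is using port 8080, and it uses minikube ip to find the VM’s address (remember, the container runs inside the VM, not on your own machine):

$ docker run -d --rm --name webtest -p 8080:8080 my_webserver:0.2
$ curl http://$(minikube ip):8080
$ docker stop webtest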

Let’s run our Pod again and this time, let’s be a bit more explicit about the port we want to expose:

$ kubectl run mywebserver --image=my_webserver:0.2 --image-pull-policy=Never --restart=Never --port 8080
pod "mywebserver" created

This by itself doesn’t do anything worthwhile yet, except give a hint about our next step:

$ kubectl expose pod mywebserver --type=NodePort
service "mywebserver" exposed

Exposing a Pod allows you to actually access it; it creates a Service for that. A Service is an object within Kubernetes that keeps track of how connections to a Pod are allowed. Services come in several types, the simplest one being ClusterIP. A ClusterIP makes the Pods behind the Service available via an internal-only IP address within the cluster. You won’t (easily, by default) be able to connect to it from outside the Kubernetes network. This is useful for backend services which only need to be accessed from within the cluster itself.
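To make that a bit more concrete: with only a ClusterIP, you could still reach the Service from inside the cluster, for example by starting a throwaway Pod and using the Service’s name. This is just a sketch and assumes the cluster can pull the busybox image from Docker Hub; we won’t need it for the rest of this post:

$ kubectl run tester --image=busybox --restart=Never -it --rm -- wget -qO- http://mywebserver:8080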

The second type, and the one we use, is NodePort. A NodePort Service automatically creates a ClusterIP as well, but in addition opens a port (by default picked from a high range, 30000-32767) on every node in the cluster; connections to that port on any node are forwarded to the ClusterIP, and from there to the Pod. This means that if you have a worker node with IP address 192.168.0.1, you would be able to connect to http://192.168.0.1:<nodeport> to reach the Pod, even if the Pod runs on another node. You actually have some freedom here in which ports you use as node port and exposed port. (We’ll touch on that in a later installment of this series.)
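You can see which port was assigned to our Service as NodePort like this (the exact number will differ per cluster, as it is picked from the high range mentioned above):

$ kubectl get service mywebserver
$ kubectl describe service mywebserver | grep NodePort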

The third type, LoadBalancer, only works when you use one of the supported cloud providers (like Google Compute Engine or Amazon Web Services). It’ll talk to the cloud provider’s API to provide actual load balancing for the exposed NodePort on all your workers. This does not work on minikube, but it’s good to be aware of it. That said, I would prefer that you learn how to use Ingress controllers and Ingress resources instead, as they tend to be more flexible and much more portable between Kubernetes environments. (If you’d like to see a blog about those, let me know and I’ll see what I can do.)

There are a few other options (such as the ExternalName type and manually assigned external IPs), which are not used that much in my experience. They require additional external setup to work correctly.

Ok, so we now have our application exposed; how do we connect? Minikube to the rescue, as it has some tools that make this a bit easier. Try the following:

$ curl $(minikube service mywebserver --url)

Hopefully, you’ll see our little HTML page! Minikube can return the correct URL (node IP and port) for the Service, so you can easily use it for testing. Feel free to just run minikube service mywebserver --url by itself to see what it returns.
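If you’re curious what minikube does for you there, you could build the same URL by hand from the VM’s IP address and the Service’s node port. A sketch (the jsonpath expression simply extracts the nodePort field from the Service):

$ NODEPORT=$(kubectl get service mywebserver -o jsonpath='{.spec.ports[0].nodePort}')
$ curl http://$(minikube ip):$NODEPORT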

Wrapping up

To recap this blog:

  • We created a little Python 3 webserver and made a Docker image out of it
  • We created a Pod on a minikube Kubernetes “cluster” that actually started the image
  • We created a Service (aka exposed a port) to allow us to connect to the running Pod
  • We noticed that we need to make sure the Pod listens on all available interfaces (well, I told you that was the case, but feel free to try and connect to a running my_webserver:0.1 Pod within the cluster to check whether I was right; a sketch of how to do that follows below)
  • We noticed an annoying delay when deleting a Pod, which we’ll address in our next installment.
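For the curious, here is one way to check that claim about my_webserver:0.1. Start the old image as a second Pod, look up its Pod IP, and try to reach it from another Pod; because that version only listens on 127.0.0.1, the request should time out. This is only a rough sketch (the busybox image and the 5 second timeout are illustrative choices, and you need to fill in the Pod IP yourself):

$ kubectl run oldserver --image=my_webserver:0.1 --image-pull-policy=Never --restart=Never
$ kubectl get pod oldserver -o jsonpath='{.status.podIP}'
$ kubectl run tester --image=busybox --restart=Never -it --rm -- wget -qO- -T 5 http://<pod-ip-from-above>:8080
$ kubectl delete pod oldserver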

Please let me know personally (via @timstoop) or via our company Twitter account (@kumina_bv) if this was helpful for you.

Continue with Part 2 of developing Kubernetes-friendly containerised applications.

