How to develop Kubernetes-friendly containerised applications, part 2

In ‘How to develop Kubernetes-friendly containerised applications, part 1’, we created a simple webserver in Python 3. After packaging it into a Docker image inside a virtual machine, we ran it on a local Minikube Kubernetes deployment (I dare not call it a cluster). We noticed how we had to expose the port, and how it took a little while (30 seconds, to be exact) before the Pod was actually terminated. Today we’ll start by solving that last “problem”. After that, we’ll work a bit on the logging.

Graceful termination of containers

First, let’s start up the cluster again:

$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ eval $(minikube docker-env)

Don’t forget that last command, as you won’t have access to the Docker engine within the Minikube VM otherwise.

So, about that delay. It’s caused by Kubernetes giving the application time to shut itself down: the container first receives a SIGTERM signal, and if it hasn’t stopped within 30 seconds, a SIGKILL follows. Our Python-based web server does not handle SIGTERM by default, and since it runs as PID 1 inside the container, the kernel even ignores the default signal disposition, so the process simply keeps running until the SIGKILL arrives. That makes shutting it down needlessly slow, and Kubernetes works best when it can stop (and start!) containers quickly.

Let’s do something about that. Luckily, it’s pretty easy in Python to catch a signal and actually do something with it. For completeness’ sake, I’ll provide the entire script, including my new changes, here:

import http.server
import sys
import threading
import signal

class MyWebpage(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write(b'''
<!DOCTYPE html>
<html>
    <head>
        <title>Hello!</title>
    </head>
    <body>
        <p>This is a demo page!</p>
    </body>
</html>''')


if __name__ == '__main__':
    def do_shutdown(signum, frame):
        threading.Thread(target=httpd.shutdown).start()
        sys.exit(0)

    signal.signal(signal.SIGTERM, do_shutdown)

    httpd = http.server.HTTPServer(('0.0.0.0', 8080), MyWebpage)
    httpd.serve_forever()

So we create a function that performs the actual shutdown in a separate thread. That’s necessary because of an implementation detail of http.server.HTTPServer (I had to google it myself as well): shutdown() blocks until the serve_forever() loop has exited, and since signal handlers run in the main thread, which is exactly the thread stuck inside serve_forever(), calling shutdown() directly from the handler would deadlock. Then we register that function as the handler for SIGTERM. It’s as simple as that.
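To see the shutdown mechanics in isolation, here’s a minimal, self-contained sketch (plain Python, no Kubernetes involved) that stops an HTTPServer from a different thread than the one serving:

```python
import http.server
import threading

# A bare HTTPServer on a random free port (port 0); the handler class
# doesn't matter here, we only care about the shutdown mechanics.
httpd = http.server.HTTPServer(('127.0.0.1', 0),
                               http.server.BaseHTTPRequestHandler)

# serve_forever() blocks, so for this demo we run it in a worker thread.
server_thread = threading.Thread(target=httpd.serve_forever)
server_thread.start()

# shutdown() is called from a *different* thread and blocks until the
# serve_forever() loop has actually exited. Calling it from the serving
# thread itself would deadlock, which is why our signal handler spawns
# a thread for it.
httpd.shutdown()
server_thread.join(timeout=5)
assert not server_thread.is_alive()  # the serve loop has stopped
httpd.server_close()
```

In our script the roles are reversed (the main thread serves and the helper thread shuts down), but the rule is the same: shutdown() must run on a thread other than the serving one.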

In a more elaborate application, this is where you do your cleanup: stop accepting new connections, finish in-flight queries, log some final things. You want this to be fast, though. If the cleanup phase takes more than 30 seconds, the container gets a SIGKILL, and there’s no way to catch that one. You can of course tell Kubernetes to wait a bit longer (this is called the grace period, set via terminationGracePeriodSeconds in the Pod spec; it defaults to the aforementioned 30 seconds), but I would not recommend that unless you have a really, really good reason. Your cluster responds a lot slower if it has to wait a long time on shutdowns.
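As a sketch of what such a handler could look like (the cleanup steps are hypothetical placeholders; your application will have its own):

```python
import signal
import sys
import threading

shutting_down = threading.Event()

def do_shutdown(signum, frame):
    # Flag that we're going down; a readiness probe could start failing
    # here so no new traffic gets routed to this Pod.
    shutting_down.set()
    # Hypothetical cleanup: stop accepting connections, finish in-flight
    # queries, flush logs. Keep all of it well under the grace period.
    sys.exit(0)

signal.signal(signal.SIGTERM, do_shutdown)
```

Note that sys.exit() inside a handler works by raising SystemExit in the main thread, so any remaining `finally` blocks and context managers still get a chance to run.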

Let’s use the Dockerfile from the previous session again and build a new image.

$ docker build -t my_webserver:0.3 .
Sending build context to Docker daemon  3.584kB
Step 1 : FROM python:alpine3.6
 ---> 1b1ac8f23f73
Step 2 : WORKDIR /usr/src/app
 ---> Using cache
 ---> 5ef4a439346c
Step 3 : COPY my_webserver.py ./
 ---> f9202dc0c68e
Removing intermediate container 89f4281c0aee
Step 4 : CMD python ./my_webserver.py
 ---> Running in dd417b54f619
 ---> 963f5b0c5346
Removing intermediate container dd417b54f619
Successfully built 963f5b0c5346

And run it again:

$ kubectl run mywebserver --image=my_webserver:0.3 --image-pull-policy=Never --restart=Never --port=8080
pod "mywebserver" created

Now delete the Pod again and check how long it takes. We’ll use a one-liner for it, as the result is pretty fast (on my machine at least):

$ kubectl delete pod mywebserver && kubectl get pods && sleep 1 && kubectl get pods
pod "mywebserver" deleted
NAME          READY     STATUS        RESTARTS   AGE
mywebserver   1/1       Terminating   0          1m
No resources found, use --show-all to see completed objects.

As you can see, we now have a Pod that responds to being terminated. The main advantage is that deploying a new image via a rolling update is now a lot faster.

Another aspect of fast deployments is the size of the image. I won’t reiterate what’s already been said about that in depth; just keep in mind that you want your images as small as possible. A container should not be like a VM or VPS. The idea behind containers is process isolation, so you want the least amount of code in there, just enough to run your application. At this point in time, Alpine-based images seem to be a very good way to achieve this, but investigate your options. Size does matter, at least in this regard.

Logging

Let’s start and expose the Pod again (if you still have the mywebserver Pod and Service set up, you can skip this):

$ kubectl run mywebserver --image=my_webserver:0.3 --image-pull-policy=Never --restart=Never --port=8080
pod "mywebserver" created
$ kubectl expose pod mywebserver --type=NodePort
service "mywebserver" exposed

Visit your little Pod a few times:

$ curl $(minikube service mywebserver --url)

And now run the following command:

$ kubectl logs mywebserver

Nothing. Could it be that this Pod doesn’t log any access requests? Let’s try it locally by just running the code and checking what happens when we visit it. You’ll need two terminals for this. In the first, start the app simply with python3 ./my_webserver.py. In the second, run this:

echo "GET / HTTP/1.0" | nc localhost 8080

Look there, the script does create log lines! Why aren’t they showing up when we request them via kubectl logs? Let’s do some investigating. First, stop the running my_webserver.py on your local machine (you can just press Ctrl+C in that terminal) and restart it with the following command:

$ python3 ./my_webserver.py 2>/dev/null

Visit it again and you’ll see that you no longer get any lines. In other words, the output is sent to stderr instead of stdout. That shouldn’t matter, though, as Docker provides access to both streams. If you want to be entirely sure the output is indeed sent to stderr, you can restart the script as follows:

(PYTHONUNBUFFERED=1 python3 ./my_webserver.py 3>&1 1>&2 2>&3) | sed -e 's/.*/^[[1;31m&^[[m/'

Where you replace both instances of ^[ with a literal escape character (type Ctrl+V followed by Esc). And all stderr output is now coloured! Okay, so that’s just a plaything.

The PYTHONUNBUFFERED variable is important here, as Python by default buffers its output. Or rather, it’s the standard I/O layer that buffers output when a stream does not correspond to a terminal, which in this case it doesn’t (we’re piping to another process). Output is then held back until the buffer fills up or an explicit flush is performed, which is why nothing reached sed right away. Might that be the solution for our Pod? Let’s try it: modify the Dockerfile so it looks like this:

FROM python:alpine3.6
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/app
COPY my_webserver.py ./
CMD ["python", "./my_webserver.py"]
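Before rebuilding, you can observe this buffering behaviour without Docker at all. The sketch below (plain Python, nothing Kubernetes-specific; `select` on pipes requires a POSIX system) starts a child process that writes one log line and then keeps running. With -u, the command-line equivalent of PYTHONUNBUFFERED=1, the line is visible immediately; without it, the line stays stuck in the child’s buffer:

```python
import select
import subprocess
import sys

# Child: write one line, then stay alive for a while. If its stdout is
# block-buffered (the default when stdout is a pipe), the line is not
# flushed until the buffer fills or the process exits.
CHILD = 'import sys, time; sys.stdout.write("log line\\n"); time.sleep(8)'

def first_line_within(args, timeout=3.0):
    p = subprocess.Popen(args, stdout=subprocess.PIPE)
    # Wait up to `timeout` seconds for data to appear on the pipe.
    ready, _, _ = select.select([p.stdout], [], [], timeout)
    line = p.stdout.readline() if ready else b''
    p.kill()
    p.wait()
    return line

# -u is the command-line equivalent of PYTHONUNBUFFERED=1
unbuffered = first_line_within([sys.executable, '-u', '-c', CHILD])
buffered = first_line_within([sys.executable, '-c', CHILD])

print(unbuffered)  # b'log line\n' -- visible while the child still runs
print(buffered)    # b'' -- still sitting in the child's buffer
```

This is exactly the situation inside the container: the process keeps running, the log line has been written, but nothing is visible on the other end of the pipe yet.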

Rebuild our image and try it (just the commands here):

docker build -t my_webserver:0.4 .
kubectl run mywebserver --image=my_webserver:0.4 --image-pull-policy=Never --restart=Never --port=8080
curl $(minikube service mywebserver --url)
kubectl logs mywebserver

Cool, we have output. Let’s recap.

We need to make sure with Python (and probably other languages as well, as this is mostly a Linux thing) that output is sent to stdout or stderr unbuffered. At that point, it’s captured by Docker and available through kubectl logs.

But I don’t like that log lines that are not errors end up on stderr; they should go to stdout instead. Let’s change that in the code. Our next version looks like this:

import http.server
import sys
import threading
import signal

class MyWebpage(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write(b'''
<!DOCTYPE html>
<html>
    <head>
        <title>Hello!</title>
    </head>
    <body>
        <p>This is a demo page!</p>
    </body>
</html>''')

    def log_message(self, format, *args):
        sys.stdout.write("%s - - [%s] %s\n" %
                         (self.address_string(),
                          self.log_date_time_string(),
                          format % args))


if __name__ == '__main__':
    def do_shutdown(signum, frame):
        threading.Thread(target=httpd.shutdown).start()
        sys.exit(0)


    signal.signal(signal.SIGTERM, do_shutdown)

    httpd = http.server.HTTPServer(('0.0.0.0', 8080), MyWebpage)
    httpd.serve_forever()

I simply override the log_message method with more or less the same code as the original (you can find it in CPython’s http/server.py), modified to write to stdout instead of stderr. Let’s build and run our new script and see if we get some output this time:

$ docker build -t my_webserver:0.5 .
Sending build context to Docker daemon  3.584kB
Step 1 : FROM python:alpine3.6
 ---> 1b1ac8f23f73
Step 2 : WORKDIR /usr/src/app
 ---> Using cache
 ---> 5ef4a439346c
Step 3 : COPY my_webserver.py ./
 ---> 60734f7c79ea
Removing intermediate container 9665c56c21c8
Step 4 : CMD python ./my_webserver.py
 ---> Running in b0fd71032822
 ---> 0ec32ff472aa
Removing intermediate container b0fd71032822
Successfully built 0ec32ff472aa
$ kubectl delete pod mywebserver
pod "mywebserver" deleted
$ kubectl run mywebserver --image=my_webserver:0.5 --image-pull-policy=Never --restart=Never --port=8080
pod "mywebserver" created
$ curl $(minikube service mywebserver --url)
... (the output)
$ kubectl logs mywebserver

Nice! We have a log message and all is well. Our last change is a bit of a non-issue, as you cannot actually tell the stdout and stderr streams apart in the kubectl logs output. But we can probably do something more with this function… in the next installment of this series! Hopefully next week.
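If you want to convince yourself the override works without building an image at all, here’s a small in-process check (plain Python, not part of the final script): it runs the server on a random free port, temporarily captures stdout, and performs one request:

```python
import http.server
import io
import sys
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'ok')

    # Same idea as in the post: route access logs to stdout.
    def log_message(self, format, *args):
        sys.stdout.write("%s - - [%s] %s\n" %
                         (self.address_string(),
                          self.log_date_time_string(),
                          format % args))

httpd = http.server.HTTPServer(('127.0.0.1', 0), Handler)  # port 0 = free port
threading.Thread(target=httpd.serve_forever, daemon=True).start()

captured, real_stdout = io.StringIO(), sys.stdout
sys.stdout = captured  # module-global swap, so the server thread's
                       # log_message writes end up in `captured` too
urllib.request.urlopen('http://127.0.0.1:%d/' % httpd.server_address[1]).read()
sys.stdout = real_stdout
httpd.shutdown()

# captured.getvalue() now holds an access-log line along the lines of:
# 127.0.0.1 - - [date/time] "GET / HTTP/1.0" 200 -
print(captured.getvalue())
```

The access log for the request lands on our captured stdout, which is precisely what kubectl logs will pick up.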

Wrapping up

To recap this post:

  • We noticed that SIGTERM wasn’t handled, so shutting down a Pod took a long while. We fixed that by catching the signal ourselves.
  • We also noticed that we didn’t get any output when we checked the logs, so we switched Python to unbuffered output.
  • As http.server.HTTPServer logs to stderr by default and I didn’t like that, we switched it to stdout.

Please let me know personally (via @timstoop) or via our company Twitter account (at @kumina_bv) if this was helpful for you.

Continue with part 3 of this blog post series about developing containerized applications.

 



								




Kumina designs, builds, operates and supports Kubernetes solutions that help companies thrive online. As Certified Kubernetes Service Partner, we know how to build real solutions.