Posts Tagged ‘amazon’

Birdwatcher: Accessing Calico/BIRD metrics through Prometheus

Friday, October 28th, 2016

At Kumina we maintain a Kubernetes setup running on Amazon EC2. For the low-level networking between containers, we make use of Calico. Calico configures all of our EC2 systems to form a mesh network. The systems in this mesh network all run an instance of the BIRD Internet Routing Daemon.

One of the problems we ran into with Calico is that it’s sometimes hard to get a holistic view of the state of the system. Calico ships with a utility called calicoctl that can be used to print the state of a single node in the mesh, but using this utility can easily become laborious as the number of EC2 instances increases.

Given that we already make heavy use of Prometheus for our monitoring, we've solved this by writing a tool called Birdwatcher that exports the metrics generated by BIRD in Prometheus' format. This allows us to put alerts in place for when an excessive number of route changes occurs, or when routes simply fail to work for a prolonged period of time.
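
To give an impression of what exporting BIRD's state to Prometheus involves, here is a minimal sketch in C++: it asks the local BIRD daemon for its protocol table through birdc and prints a single gauge in Prometheus' text exposition format. The metric name, the crude parsing, and the choice of birdc are our illustration here, not necessarily how Birdwatcher itself is implemented.

```cpp
// Minimal sketch: derive one gauge from BIRD and print it in
// Prometheus' text exposition format. The metric name and the
// parsing are illustrative; Birdwatcher's real metrics differ.
#include <cstdio>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    // Ask the local BIRD daemon for its protocol table.
    FILE *pipe = popen("birdc show protocols", "r");
    if (!pipe) {
        std::cerr << "failed to run birdc\n";
        return 1;
    }
    int up = 0;
    char buf[512];
    while (fgets(buf, sizeof(buf), pipe)) {
        // Crude check: count protocol lines whose state column is "up".
        std::istringstream line(buf);
        std::string name, proto, table, state;
        if (line >> name >> proto >> table >> state && state == "up")
            ++up;
    }
    pclose(pipe);

    // Exposition format: HELP/TYPE comments followed by samples.
    std::cout << "# HELP bird_protocols_up Number of BIRD protocols in state up.\n"
              << "# TYPE bird_protocols_up gauge\n"
              << "bird_protocols_up " << up << "\n";
    return 0;
}
```

A real exporter would serve this output over HTTP on a /metrics endpoint, so that Prometheus can scrape it periodically.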

Today we’re happy to announce that Birdwatcher is now available on our company’s GitHub page. If you’re a user of both Calico and Prometheus, be sure to give it a try. Enjoy!


[Screenshot: Birdwatcher]

awssyncer: an automatic syncer for Amazon S3 that makes use of inotify

Friday, September 16th, 2016

At Kumina, we’re strong users of the Amazon AWS cloud computing platform. We’ve been using EC2 instances for quite some time and are currently working on expanding this by making use of Kubernetes.

While setting this up, we noticed that we sometimes want to run jobs that need to keep track of small amounts of local state (i.e., files on disk). We decided to store this data in S3, while still keeping it efficiently available through the local file system. The advantage of using S3 for this purpose is that it's replicated across availability zones, unlike an EBS volume, which is tied to a single availability zone.

For this purpose we've developed a new utility called awssyncer, which is now available on GitHub! awssyncer is written in C++ and uses Linux's inotify to keep track of local modifications to a directory on disk. These inotify events determine which files need to be synced back into S3, so the utility provides continuous one-way synchronisation from local disk to S3. A simple container startup script handles the other direction, syncing files from S3 to local disk on startup.
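
As a rough illustration of the inotify side of such a tool, the sketch below watches a single directory and reports which files change, which is the signal a syncer needs in order to schedule uploads. The /state path and the exact event mask are placeholders; awssyncer's actual implementation (recursive watches, the S3 uploads themselves, error handling) is more involved.

```cpp
// Sketch of the inotify side of a one-way syncer: watch a directory
// and report which files changed, so they could be re-uploaded to S3.
// The upload step itself is left out.
#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>
#include <iostream>

int main() {
    int fd = inotify_init1(0);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    // Watch for writes, creations, deletions and moves under /state.
    // Both the path and the flag set are illustrative.
    int wd = inotify_add_watch(fd, "/state",
                               IN_CLOSE_WRITE | IN_CREATE | IN_DELETE |
                               IN_MOVED_TO | IN_MOVED_FROM);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    alignas(inotify_event) char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));
        if (len <= 0) break;
        // A single read can return multiple variable-length events.
        for (char *p = buf; p < buf + len;) {
            auto *ev = reinterpret_cast<inotify_event *>(p);
            if (ev->len > 0)
                // A real syncer would enqueue this path for upload to S3.
                std::cout << "changed: /state/" << ev->name << "\n";
            p += sizeof(inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}
```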

Though we realise that this utility is fairly specific to our situation at hand, we do invite all of you to give it a try. Feel free to get in touch in case you have any questions or discover any bugs!

Publishing EC2 scripts on GitHub

Friday, April 29th, 2011

We’re glad to announce that we’ve published our set of EC2 scripts on GitHub! The kuminami repository contains current versions of the code described in the two blog posts below: “Automatically creating entries in PowerDNS for Amazon EC2 instances” and “Kumina into the cloud; creating Amazon EC2 images”.

In addition, the repository also contains the infrastructure to package the instance spawn script and the DNS syncer as a Debian package.

Automatically creating entries in PowerDNS for Amazon EC2 instances

Monday, April 18th, 2011

By default, instances created on Amazon EC2 are given a randomly assigned IPv4 address. It is, however, possible to pin instances to a preallocated IP address, called an Elastic IP. Because IPv4 addresses are becoming very scarce, Amazon only allows a customer to allocate up to five Elastic IPs. And even though Elastic IPs are free to use while attached to a running instance, an unattached Elastic IP costs $0.01 per hour.

Because of these two limitations, we have decided to simply use the randomly assigned addresses, which is why we’ve written a script that automatically creates DNS entries in PowerDNS for instances managed through EC2.
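
The gist of such a script is straightforward: read the instance's identity and public IPv4 address from the EC2 metadata service and turn that into an A record in PowerDNS. The C++ sketch below, using libcurl, shows the idea; the zone name ec2.example.com, the domain_id, and the use of PowerDNS' generic SQL backend are placeholders, and the actual script differs in its details.

```cpp
// Sketch: fetch this instance's identity and public IPv4 address from
// the EC2 metadata service and print the row one might insert into
// PowerDNS' generic SQL backend. Build with -lcurl.
#include <curl/curl.h>
#include <iostream>
#include <string>

// Append the HTTP response body into a std::string.
static size_t collect(char *data, size_t size, size_t nmemb, void *out) {
    static_cast<std::string *>(out)->append(data, size * nmemb);
    return size * nmemb;
}

// Fetch one value from the EC2 instance metadata service.
static std::string metadata(const std::string &key) {
    std::string url = "http://169.254.169.254/latest/meta-data/" + key;
    std::string body;
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return body;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    std::string id = metadata("instance-id");
    std::string ip = metadata("public-ipv4");
    // With PowerDNS' generic SQL backend, an A record is a row in the
    // "records" table; domain_id 1 and the zone are placeholders.
    std::cout << "INSERT INTO records (domain_id, name, type, content, ttl) "
              << "VALUES (1, '" << id << ".ec2.example.com', 'A', '"
              << ip << "', 300);\n";
    curl_global_cleanup();
    return 0;
}
```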

Kumina into the cloud; creating Amazon EC2 images

Wednesday, April 13th, 2011

At Kumina we have gained a lot of experience deploying and administering Debian installations on virtualisation platforms such as KVM and Xen. In all our setups, we also administer the Dom0, the operating system running the virtualisation software. Lately we have also been looking at cloud computing solutions, such as Amazon EC2. One of the advantages of cloud computing is that it makes scaling easy: one can simply spawn new system instances on demand. Unfortunately, the lack of administrative access to the Dom0 can make it harder to debug and recover instances.

In order to use Amazon EC2 to its full potential, it is important that we can quickly spawn Debian installations that are automatically configured using Puppet. We accomplish this by creating our own Kumina-branded Amazon Machine Image (AMI). Compared to the stock Amazon Linux and Ubuntu images, it takes a different approach: instead of being an image of a pre-installed Debian system, it is a relatively small system (about 12 MB) that uses debootstrap to place an up-to-date installation on the provided storage space. When finished, it stores a set of pre-generated SSL certificates for Puppet in the right place and reboots into the new Debian installation. From within this system, we run Puppet to install additional pieces of software and configure the system correctly.
