
We are big fans of containerisation at Future Digital. It is the future of modern software development. We are currently building our new platform on top of web-scale technologies including Docker and Kubernetes.

In this blog, I aim to explain what containerisation is, why we are using it and what advantages it offers.

 

History of Containers

Shipping containers are ubiquitous, standardised and available anywhere in the world. They are extremely easy to use. Simply open them up, load in your cargo and lock the doors. They can be moved using different modes of transport, including road, rail and sea, across the world. At no point during the journey do the contents need to be repacked or modified in any way.

Each shipping container is secure, and its contents are isolated from those of the others; a container full of lithium can safely sit next to a container full of bottled water without any risk of a reaction. Once a spot on a container ship has been secured, you can be confident that there will be room for your cargo for the duration of the whole trip. There is no way for a neighbouring container to steal more than its fair share of space.

The impact of standardised shipping containers was a dramatic fall in the cost of shipping goods overseas, from around $5 per tonne to $0.16!

Software containers will fulfil a similar role for applications.

Software Containers

A software container is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries and settings. A containerised application will always run the same regardless of the environment it is hosted in. Once the container has been defined, that image is used to create containers in any environment, from the developer's laptop, to the UAT environment and finally into production in the cloud. This consistency is very useful.
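To make this concrete, here is a minimal sketch of how such an image might be defined with a Dockerfile. The service, base image and file names are hypothetical, chosen purely for illustration:

```dockerfile
# Hypothetical Node.js microservice; base image and file names are illustrative.
FROM node:18-alpine           # runtime and system libraries come from the base image
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev         # install only production dependencies
COPY . .
ENV NODE_ENV=production       # settings baked into the image
CMD ["node", "server.js"]     # the code to run
```

An image built once from a file like this can then be run unchanged on a laptop, in UAT or in production.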

Anyone who has worked in the software development industry for any length of time will have encountered the classic situation: the code is failing in production, but the developer opens their local development environment and exclaims:

Works on my machine

The root cause of the failure will normally be attributed to the environment the code is running in. With application containers, the developer can spin up a container to replicate the issue and be confident that it exactly matches what is running in production.

 

Virtualisation vs Containerisation

How is software containerisation different from virtualisation? Both technologies allow you to divorce the workload from the underlying physical hardware, but they go about it very differently.

Containers vs. Virtual Machines

Virtualisation is a tried and tested approach; it emerged as a way to use server resources more efficiently and reduce server sprawl. A typical virtualised server consists of a host operating system, which interacts with the physical server hardware, and a hypervisor, which runs a number of VM guest instances. However, a virtual machine is a complete system, which limits its portability, and virtualisation can be very wasteful: each VM typically requires a full OS licence, and there is a large amount of duplication between VMs, wasting server memory and limiting the number of VMs a physical server can host.

Containerisation emerged in the early 2000s but really exploded when Docker launched in 2013. It allows each independent virtual instance to share a single host operating system, reducing duplication and wasted resources. Each container holds only the application and any related binaries or libraries. It is often referred to as OS-level virtualisation, and because containers are so lightweight, they are more portable, faster to back up and restore, and require much less memory.

Compared with virtual machine based application instances, the difference in utilisation can be huge: it is possible to fit anywhere between 10 and 100 times as many container instances on a given physical server.

With all of Future Digital’s new microservices being containerised, that’s potentially a lot of containers to manage!

 

Container Orchestration

Container orchestration is all about how we deploy and manage our containers. Done well, it gives us the freedom to forget about which server will host a particular container, or how that container will be started, monitored and killed.

We decided to use Kubernetes (K8S) for orchestration. K8S is built on Google's 15 years of experience running production workloads, along with best-of-breed ideas and practices from the community. It fully automates the process of deploying and managing multi-container applications at scale.

K8S offers our developers some really powerful features out of the box including:

  • Horizontal scaling – we can scale the platform up and down automatically based on CPU usage.
  • Self-healing – any containers that fail can be restarted automatically. Containers can be monitored with health checks and killed if they become non-responsive.
  • Automated roll-out and rollback – changes can be rolled out progressively while application health is monitored. If something goes wrong, we can automatically roll back while still maintaining application uptime.
  • And much, much more…
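The features above are all expressed declaratively in a Deployment manifest. The following is a sketch only; the service name, image and probe endpoint are hypothetical, but the fields are standard K8S:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: report-service            # hypothetical microservice
spec:
  replicas: 3                     # K8S keeps three copies running, restarting any that fail
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1           # roll out changes progressively, preserving uptime
  selector:
    matchLabels:
      app: report-service
  template:
    metadata:
      labels:
        app: report-service
    spec:
      containers:
        - name: report-service
          image: futuredigital/report-service:1.0   # hypothetical image name
          livenessProbe:          # health check; non-responsive containers are killed and replaced
            httpGet:
              path: /healthz
              port: 8080
```

K8S continuously reconciles the cluster's actual state against this desired state, which is what makes the self-healing and rollout behaviour automatic rather than scripted.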

Why?

There are many reasons why we have decided to use this technology stack but I will highlight some of the most important ones.

Primarily, for our customers, we will be able to cost-effectively provide the resources required to cope with demand as it peaks during the course of the school day. Overnight, when demand is traditionally low, we will be able to scale back the platform rather than remain over-provisioned. All this will happen automatically, so if someone wakes up in the middle of the night and wants to run a huge report, the platform can scale up to cope without our developers having to intervene!
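This kind of demand-driven scaling maps onto K8S's HorizontalPodAutoscaler. A sketch of what that could look like, with hypothetical names and thresholds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: report-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: report-service        # hypothetical Deployment to scale
  minReplicas: 2                # overnight floor when demand is low
  maxReplicas: 20               # ceiling for school-day peaks
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```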

For our developers, we can ship code faster and more often, with more confidence. Microservices and containerisation remove a lot of the deployment headaches associated with monoliths. We are able to take advantage of blue-green deployments, significantly de-risking the deployment process and reducing the likelihood of downtime. This means we can ship changes during the day, rather than having to wait for a maintenance window overnight. Debugging production issues also becomes more efficient: we can spin up an exact copy of production locally to diagnose and fix the issue.

For our support team, we have better visibility of what is happening across the system. K8S helps us to deal with outages, automatically restarting containers that fall over and monitoring the overall health of the platform. In the longer term, our platform will become more reliable, allowing our support team to be more proactive.
