There’s a revolution underway in application deployment, and you may or may not be aware of it. Businesses are adopting technology the large public cloud providers have been using for many years, and that trend is only going to increase. In my previous post on digital transformation, I looked at examining the business processes you run and how to position a new digital strategy within your organization. In this article, we look at one of the more prevalent methods of modernizing and paying off some of that technical debt. The large cloud providers, and Google in particular, have been deploying applications for years using a technology called containerization, which lets them run and scale isolated application elements while gaining greater CPU efficiency.


What is Containerization?

While virtualization is hardware abstraction, containerization is operating system abstraction. Containerization has been growing in popularity because it gets around some of the limitations of machine virtualization: the sheer size of operating systems, the overhead of getting each operating system up and keeping it running, and, as previously mentioned, the resulting lower CPU utilization. (Remember, it’s the application we really want to run and interact with; the operating system is just there to let it stand up.)


Benefits of Containerization

A key benefit of containerization is that you can run multiple applications within the user space of a Linux operating system, kept separate from the kernel. Whereas each application requires its own dedicated operating system when virtualized, a container holds only what is required to run its application (an encapsulated runtime environment, if you will). Because of this encapsulation, the application doesn’t see processes or resources outside of itself. Since isolation is done down at the kernel level, each application no longer needs its own bloated operating system. Containers can also be moved without any reliance on the underlying operating system and hardware, which gives greater reliability for the application and removes migration issues. And because the operating system is already up and running and there’s no hypervisor in the execution path, you can spin up a single container, or thousands, within seconds.


One of the earliest mainstream use cases of containers with which the wider audience may have interacted is Siri, Apple’s voice and digital assistant. Each time you record some audio, it’s sent to a container to be analyzed and a response generated. Then the application quits, and the container is removed. This helps explain why you can’t get a follow-up query to work with Siri. Another key benefit of containerization is its ability to help speed up development. Docker’s slogan "run any app anywhere" comes from the idea that if a developer builds an application on a laptop and it works there, it should run on a server elsewhere. This is the origin of the idea of improved reliability. In turn, the development environment no longer needs to mirror production exactly, which reduces costs and lets teams tackle and resolve the real issues we see with applications.


One of the major advantages of moving to the cloud is elasticity, and a great way to make use of this is to start using containers. By starting on-premises with legacy application stacks and then slowly converting, or refactoring, their constituent parts to containers, you can make the transition to a cloud provider with greater ease. After all, containerization is essentially a way of abstracting away differences in OS distributions and underlying infrastructure by encapsulating an application’s files, libraries, environment variables, and dependencies into one neat package. Containers also help with application management by breaking applications up into smaller parts that function independently. You can monitor and refine these components individually, which leads us nicely to microservices.
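To make that "one neat package" idea concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python web app (the file names and base image are illustrative, not from any particular project):

```dockerfile
# Hypothetical example: package an application, its dependencies, and
# its environment into a single image, independent of the host OS.
FROM python:3.12-slim

WORKDIR /app

# Dependencies are pinned inside the image, not installed on the host.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application files and environment variables travel with the image.
COPY app.py .
ENV APP_ENV=production

CMD ["python", "app.py"]
```

The resulting image runs the same way on a developer’s laptop, an on-premises server, or a cloud provider, because everything the application needs is inside it.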



Applications separated into microservices are easier to manage, as you can alter various smaller parts for improvement while not breaking the overall application. Or, individual instances can be brought online immediately when required to meet growing demand.


By using microservices, you become more agile: you move to independently deployable services by carving an application up into smaller pieces. Each service can then be developed, tested, and deployed independently, on a more frequent schedule. This should allow you to start paying off some of that previously discussed technical debt.
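As a rough illustration, a Compose file can describe two such services side by side; each can be updated or scaled without touching the other. The service names and images here are hypothetical:

```yaml
# Hypothetical docker-compose.yml: two microservices that can be
# built, deployed, and scaled independently of one another.
services:
  billing:
    image: example/billing:1.4.2   # update this service without touching the other
    ports:
      - "8081:8080"
  reporting:
    image: example/reporting:2.0.0
    ports:
      - "8082:8080"
    deploy:
      replicas: 3                  # scale just this service to meet demand
```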


Understanding the Market

There are several different types of container software, and this subdivision is sometimes misunderstood when people talk about the various products available: container engine software, container management software, container monitoring software, and container network software. The main confusion in the IT market is between container engine platforms and orchestration/management solutions. Docker, Kubernetes, Rancher, Amazon Elastic Container Service, Red Hat OpenShift, Mesosphere, and Docker Swarm are just some of the higher-profile names thriving in this marketplace. Two of the main players are Docker and Kubernetes.


Docker is a container platform designed to help build, ship, and run applications. At its heart is the Docker Engine, the core container runtime and the foundation for running containers. Docker Swarm is part of Docker’s enterprise suite and provides orchestration and management functionality similar to that of Kubernetes; if you’re running on the Docker Engine, either can be used to orchestrate your containers.


Kubernetes is an open-source container orchestration platform that grew out of a software development project at Google. It can deploy, manage, and scale containerized applications at planetary scale.
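Kubernetes works declaratively: you describe the state you want and the platform converges the cluster to match it. A minimal sketch of a Deployment manifest, with a hypothetical image name, looks like this:

```yaml
# Hypothetical Kubernetes Deployment: declare the desired state
# (three replicas of one container image) and let the platform
# keep the cluster converged to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0.0
          ports:
            - containerPort: 8080
```

Scaling is then a one-line change (or a `kubectl scale deployment web --replicas=10`); Kubernetes starts or stops containers until reality matches the declaration.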


There is still some debate as to whether virtualized or containerized applications are more secure, and while there are arguments for each, it’s worth noting the two technologies can be used in conjunction with each other. For instance, VMware’s Photon OS is a container-optimized Linux distribution that can run on vSphere.


When it comes to dealing with containers, some design factors and ideals differ from those of running virtual machines:

Instances are disposable. If an instance stops working, just kill it and start another.

Log files are saved externally from the instance, so they can be analyzed later or collated into a central repository.

Applications need the ability to retry operations rather than crashing. This allows new microservices to be started if demand cannot be met.

Persistent data needs to be treated as special, so how it is accessed and stored must come into consideration. Containers consist of two parts: an image file, which is like a snapshot of the required application, and a configuration file. These are read-only, and therefore you need to store data elsewhere or it will be deleted on clean-up.

Plan for redundancy and scalability up front; this determines how well your containers cope with failure and growth over time.

You must have a method to check that a container is both alive and ready, and if it’s not responding, to quickly kill that instance and start another.
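That last point, checking that a container is both alive and ready, is built into orchestrators such as Kubernetes as liveness and readiness probes. A sketch of the relevant container-spec fragment, with hypothetical endpoints and image name:

```yaml
# Hypothetical container spec fragment: liveness and readiness probes
# let the platform kill unresponsive instances and start replacements
# automatically, with no human interaction.
containers:
  - name: web
    image: example/web:1.0.0
    livenessProbe:            # "is the container alive?" — restart it if not
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:           # "is it ready for traffic?" — hold requests until it is
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
```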


Automation and APIs

Application programming interfaces (APIs) are sections of code exposed by an application that allow other code or applications to control its behavior. So, you have the CLI and GUI for human interaction with an application, and an API for machines. By allowing other machines to access APIs, you gain the ability to automate the application, and the infrastructure as a whole. There are tools available today that ensure applications are in their desired state (i.e., working as intended with the correct settings enabled) and modify them if necessary. This interaction requires access to the application’s API, both to make changes and to interrogate it to see whether it’s indeed in the desired state.
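The desired-state pattern those tools use can be sketched in a few lines. This is a minimal illustration, not any particular tool’s implementation; in a real tool, the actual state would be fetched from the application’s API and each change applied through it:

```python
# Minimal sketch of desired-state automation: compare the state an
# application reports (via its API) against the state we want, and
# apply only the differences. The dictionaries stand in for real
# API responses and calls.

def diff_state(desired: dict, actual: dict) -> dict:
    """Return the settings that must change to reach the desired state."""
    return {key: value for key, value in desired.items()
            if actual.get(key) != value}

def reconcile(desired: dict, actual: dict) -> dict:
    """Apply only the differing settings and return the converged state."""
    changes = diff_state(desired, actual)
    converged = dict(actual)
    converged.update(changes)   # in a real tool, each change is an API call
    return converged

desired = {"replicas": 3, "log_level": "info"}
actual = {"replicas": 1, "log_level": "info", "uptime": 9341}

print(diff_state(desired, actual))   # only 'replicas' differs
print(reconcile(desired, actual)["replicas"])
```

Interrogate, diff, apply: the same loop underpins configuration management tools and container orchestrators alike.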


Containers and Automation

As mentioned previously, the ability to spin up vast numbers of containers with next to no delay to meet spikes in demand requires a level of monitoring and automation that removes the need for human interaction. Whether you’re Google doing several billion container start-ups and stops a week, a website that finds cheap travel, or a billing application that needs to scale with the fiscal period, there’s a benefit to using containers and automation to meet that range of demands.
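The core of that automation is a simple proportional rule: measure demand, divide by what one instance can handle, and scale to the result. A hedged sketch, with illustrative numbers (real autoscalers add smoothing and cool-down periods on top of this):

```python
# Sketch of demand-driven scaling: decide how many container instances
# to run from a requests-per-instance target, bounded by a floor and a
# ceiling. The capacity figures are illustrative assumptions.
import math

def desired_instances(current_load: int, capacity_per_instance: int,
                      min_instances: int = 1, max_instances: int = 100) -> int:
    """Return the number of instances needed to serve current_load."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

print(desired_instances(0, 50))       # quiet period: scale down to the floor
print(desired_instances(950, 50))     # spike: 19 instances
print(desired_instances(100_000, 50)) # capped at the configured ceiling
```

Because containers start in seconds, a loop running this calculation every few seconds can track demand far more closely than any human operator could.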


As we move to a more hybrid style of deploying applications, running them as containers and microservices gives you the flexibility to move each workload to the best possible location, which may or may not be a public cloud, without fear of breaking the service. Whether you start in one location and later move to another, such a migration will soon be viewed the same way as a version update: just another task to be undertaken during an application’s lifespan.


Containers offer a standardized way of deploying applications, and automation is a way to accomplish repetitive tasks and free up your workforce. As the two become more entwined in your environment, you should start to see a higher standard and faster deployment within your infrastructure. This then leads you on the journey to continuous integration and continuous delivery, but that’s a story for another day.