So far in this virtualization series, we’ve discussed its history, the pre-eminence of server virtualization, the issues it has created, and how the changing demands placed on our infrastructure are leading us to consider virtualizing every element of our stack and moving to a more software-defined infrastructure model.

 

In this post, we explore the growing importance of infrastructure as code and the part virtualization of our entire stack plays in delivering it.

 

Why Infrastructure as Code (IaC)

According to Wikipedia, IaC is

 

“the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this comprises both physical equipment such as bare-metal servers as well as virtual machines and associated configuration resources.”

 

Why is this important?

 

Evolving Infrastructure Delivery

Traditional approaches required us to individually acquire and install custom hardware, then configure, integrate, and commission every element of it before our new environment was ready. This both introduced delays and opened the door to risk.

 

As enterprises demand more flexible, portable, and responsive infrastructure, these approaches are no longer acceptable. Therefore, the move to a more software-defined, virtual hardware stack is crucial to removing these impediments and meeting the needs of our modern enterprise. IaC is at the heart of this change.

 

The IaC Benefit

If you need to build, change, or extend infrastructure at speed, at scale, and with consistency, all while following your best practices, then Infrastructure as Code is worthy of your consideration.

 

What does this have to do with virtualization? If we want to deploy as code, then our infrastructure must in its own way be code. Virtualization is how we software-define our infrastructure, giving us the ability to deploy it anywhere compatible infrastructure exists.

 

IaC in Action

How does IaC work? Public cloud perhaps provides the best examples, as we can automate cloud infrastructure deployment at scale and with consistency, without concern for the underlying hardware.

 

If I want to create 100 virtual Windows desktops, I can, via an IaC template, call a predefined image, deploy it onto a predefined VM, connect it to a predefined network, and automate the delivery of 100 identical desktops into the infrastructure.
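To make the idea concrete, here is a minimal sketch of that template-driven flow in plain Python. This is not a real cloud SDK or IaC tool; the image name, VM size, and network are all hypothetical, and the "deployment" simply stamps out desktop specifications from the one template:

```python
from dataclasses import dataclass

# Hypothetical template: the image, VM size, and network are defined once.
@dataclass(frozen=True)
class DesktopTemplate:
    image: str
    vm_size: str
    network: str

def deploy(template: DesktopTemplate, count: int) -> list[dict]:
    """Stamp out `count` identical desktop specs from a single template."""
    return [
        {
            "name": f"desktop-{i:03d}",   # only the name differs
            "image": template.image,
            "vm_size": template.vm_size,
            "network": template.network,
        }
        for i in range(count)
    ]

# One template, 100 identical desktops.
template = DesktopTemplate(image="win11-corp",
                           vm_size="Standard_D2s",
                           network="desktops-vnet")
desktops = deploy(template, 100)
```

Because the template is the single source of truth, running `deploy` again produces exactly the same result, which is the consistency property IaC trades on.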

 

Importantly, the template means I can recreate the infrastructure whenever I like and, regardless of who deploys it, know it will be consistent, delivering the same experience every time.

 

The real power of this approach doesn’t come from a template that only works in one place. Increasingly standardized approaches allow us to deploy the same template in multiple locations. When our template can be deployed across multiple environments, whether in our data center, in the public cloud, or in a mix of both, it provides the flexibility and portability crucial to modern infrastructure.
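Extending the earlier sketch, portability means the same template can be rendered for different targets, with only the placement changing, never the specification. Again, this is an illustrative Python sketch, not a real tool; the target names and template fields are hypothetical:

```python
# Hypothetical: one template, rendered for several deployment targets.
TEMPLATE = {"image": "win11-corp", "vm_size": "2vcpu-8gb", "count": 3}

def render(template: dict, target: str) -> list[dict]:
    """Render the same template for a given environment.
    Only the placement (target and name) differs between environments;
    the desktop specification itself stays identical."""
    spec = {k: v for k, v in template.items() if k != "count"}
    return [
        {"target": target, "name": f"{target}-desktop-{i}", **spec}
        for i in range(template["count"])
    ]

# Same template, two environments.
onprem = render(TEMPLATE, "datacenter")
cloud = render(TEMPLATE, "public-cloud")
```

Stripping the placement fields from both renderings leaves identical specifications, which is the portability guarantee standardized templates aim for.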

 

As our enterprises demand a quick, consistent response, at scale, across hybrid environments, standardizing our deployment models becomes crucial. That can only be done if we standardize our infrastructure elements, which ties us back to the importance of virtualization, not only in delivering our contemporary infrastructure but also in the way we will deliver infrastructure in the future.

 

We started this series by asking whether virtualization would remain relevant in our future infrastructure. In the final part, we’ll summarize what the future of virtualization looks like and why its concepts will remain a core part of our infrastructure.