During this series, we’ve looked at virtualization’s past, the challenges it has created, and how its evolution will allow it to continue to play a part in infrastructure delivery in the future. In this final part, I’d like to draw these strands together and share how I believe the concept of virtualization will continue to be a core part of infrastructure delivery, even if it’s a little different from what we’re used to.
Will Cloud Be the End of Virtualization?
When part one of this series was published, one question caught my attention: “Will public cloud kill virtualization?”
I hadn’t considered this for this series, but it intrigued me nonetheless.
It caught my attention not because I believe cloud will kill virtualization, but because cloud has played a significant part in redefining the way we think about infrastructure delivery, and consequently, how we need to think about virtualization.
This redefinition is not just technical; it’s also a fundamental shift in focus. When we discuss infrastructure in a cloud environment, we don’t think about vendor preferences, hardware configurations, or individual components. We focus on the service, from virtual machines to AI and everything in between. Our focus is the outcome the service delivers, not the technicalities of how we deliver it.
I believe this change in expectation drives the future evolution of virtualization.
Virtualizing All the Things
This future is based on virtualizing more elements of our infrastructure stack: not just servers, but networking and storage too. It’s about abstracting the capabilities of each of these elements from specific, custom hardware and allowing them to be deployed in any compatible environment.
More than this, making more of our infrastructure software-based allows us to more easily automate deployment, deliver our infrastructure as code, and provide the flexibility and portability a modern enterprise demands.
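To make the infrastructure-as-code idea above a little more concrete, here is a minimal sketch of the pattern most such tools share: desired state is declared as data kept in version control, and a reconciler compares it against what actually exists to compute the changes needed. All names here (`plan`, the resource dictionaries) are illustrative assumptions, not any real tool’s API.

```python
def plan(current: dict, desired: dict) -> dict:
    """Compare current and desired state, returning the actions to take."""
    to_create = {k: v for k, v in desired.items() if k not in current}
    to_delete = {k: v for k, v in current.items() if k not in desired}
    to_update = {k: desired[k] for k in desired
                 if k in current and current[k] != desired[k]}
    return {"create": to_create, "update": to_update, "delete": to_delete}

# Desired state is declared alongside application code and versioned with it.
desired = {
    "web-vm": {"cpus": 2, "memory_gb": 4},
    "db-vm":  {"cpus": 4, "memory_gb": 16},
}
# Current state, as discovered from the environment.
current = {
    "web-vm": {"cpus": 2, "memory_gb": 2},   # under-provisioned
    "old-vm": {"cpus": 1, "memory_gb": 1},   # no longer declared
}

actions = plan(current, desired)
```

Because the environment is software-defined end to end, the resulting plan can be applied automatically and repeatably, which is what gives infrastructure as code its portability.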
Abstracting Even Further
However, this isn’t where the evolution of virtualization stops. We’re already seeing the development of its next phase.
New models like containerization and serverless functions abstract away not only the reliance on hardware but also on the operating system. They’re designed to be ephemeral: created or called on demand, delivering their outcome before disappearing, and recreated whenever or wherever they’re needed rather than remaining in our infrastructure forever and creating an endless sprawl of virtual resources.
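The ephemeral model described above can be sketched in a few lines. This is a simplified illustration, not any provider’s real API: a handler holds no state between calls, and the platform’s job is to create an isolated environment for each invocation and tear it down afterward.

```python
def handle(event: dict) -> dict:
    """Process one event; all state is local and discarded on return."""
    total = sum(event.get("values", []))
    return {"status": "ok", "total": total}

def invoke(handler, event: dict) -> dict:
    """A stand-in for the platform: spin up, run once, tear down.

    In a real serverless platform, an isolated environment (a container
    or microVM) would be provisioned here and destroyed after the call
    returns; nothing of the invocation persists in the infrastructure.
    """
    return handler(event)

response = invoke(handle, {"values": [1, 2, 3]})
```

The key property is that nothing outlives the invocation: capacity exists only while the outcome is being delivered, which is what prevents the sprawl of long-lived virtual resources.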
Virtualizing infrastructure has, over the last 20 years, transformed the way we deliver our IT systems and has allowed us to deliver models focused on outcomes, provide flexibility, and quickly meet our needs.
At the start of this series, we asked whether virtualization has a future. As we begin to rethink not only how we deliver infrastructure but also how we architect it, does virtualization still have a place?
The new architectures we’re building are inspired by the large cloud providers: built at scale, deployed at speed, anywhere we want, with little consideration of the underlying infrastructure and, where appropriate, existing only as long as needed.
Virtualization remains at the very core of these new infrastructures, whether as software-defined infrastructure, containers, or serverless functions. These are all evolutions of the virtualization concept, and while it continues to evolve, it will remain relevant for some time to come.
I hope you’ve enjoyed this series. Thank you for your comments and for getting involved. Hopefully it’s given you some new ideas about how you can use virtualization in your infrastructure, now and in the future.
SolarWinds solutions are rooted in our deep connection to our user base in the THWACK® online community. More than 150,000 members are here to solve problems, share technology and best practices, and directly contribute to our product development process. Learn more today by joining now.