I was recently working with a client deploying a cloud solution with vCloud Director (vCD). They seemed to be pursuing this primarily because of their significant investment in VMware and a desire to stay current with the latest products, more than any desire to change how they purchased and managed infrastructure resources.
They traditionally had departments purchase physical servers when they needed them. By deploying vCD, they are essentially transitioning to more of an Infrastructure-as-a-Service capability. This aligns well with a consumption-based funding model rather than buying physical resources for every new project, but they hadn't really planned how they would address paying for resource allocation.
They simply didn't know the cost impact of this model and wanted to wait until they knew more. So their plan was to build it, start moving applications in systematically, and figure out the ongoing resource funding later.
I find it interesting that I see this approach to building clouds about as often as I see the big planning and design projects.
Is this how you rolled out your cloud, or how you're planning to roll it out? What are the pros and cons of this approach? Is planning more important than potentially having to rebuild the entire environment?
Or is this yet another benefit of virtualization, where building and rebuilding environments doesn't necessarily mean any physical hardware changes, so organizations can deploy and redeploy faster?
Hrm, this is a very interesting topic...
I agree that this is likely becoming a very common thing, and after thinking about it, I think that's OK. Virtualization and clouds are quickly becoming the new normal for just about every type of environment, so companies know they need to move forward with these technologies. Sitting at this juncture, they have two options: spend a bunch of time putting together a plan around something they don't fully understand (a plan that will likely change significantly as they learn more), or jump in and figure it out as they go. Since virtualization makes your environment so flexible, it's difficult to argue against just jumping in, at which point your only major decision is which hypervisor to go with.
One of the major downsides we're seeing in the industry as a result of these organically grown virtualized infrastructures is sprawl. Without the proper understanding and the necessary tools, sprawl in a virtualized environment can be a major problem and ultimately cost you a lot of money, both in licensing and in troubleshooting the resulting problems.
Based on this I guess my conclusion is as follows...
No need to spend a ton of time planning; jump in and learn the technology, because it's the direction everything is moving, and it's moving very quickly. Just be aware of the pitfalls and make sure you're equipped with the proper tools to understand the environments you're building.
At least that's my $0.02
Rightfully so, there's much more of a "What's the worst that could happen?" approach with virtual machines. Even if you get the cloud design underneath really wrong, the VMs can simply be migrated around once you figure it out. As long as the VMs are internal and you're not exposing machines from a security standpoint, I'm not seeing a whole lot wrong with this approach, provided you don't throw big dollars at your first attempt.
Our virtualization project was driven partly by the fact that a lot of our infrastructure was outdated and underutilized physical hardware. We played around in the kiddie pool (2004-2007 on GSX) for quite a while so we could get comfortable with the technology, then went whole hog (ESX) to get a check from an Edison rebate program and retire the outdated hardware. We had a plan when we did it, but the plan keeps changing as the times change.
SSDs in a SAN were outrageously expensive and completely unthinkable, but now I have about 1.5TB of them.
vApps... what's that? Now I don't know how I would keep tabs on everything without them.
MBR is fine, we don't need GPT.
8MB block sizes are good because you maximize your possible disk size (there won't be a need to standardize on a 1MB block size later to take advantage of SAN-assisted actions).
2TB minus 512 bytes is the biggest you'll get; if you need more, split the data up or use (ack!) dynamic disks.
SAN-assisted clone/move, huh... won't the SAN always do the read/write?
Those could have been thoughts in any plan made five years ago, but today they would be considered dumb.
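As a side note on those old block-size assumptions: on VMFS-3, the block size chosen at format time fixed the maximum file (VMDK) size, which is where the famous "2TB minus 512 bytes" ceiling at an 8MB block size came from. A rough sketch of that arithmetic, assuming the commonly cited figure of roughly 2^18 addressable file blocks per file:

```python
# Illustrative arithmetic only -- approximating the VMFS-3 relationship
# between datastore block size and maximum VMDK size. The 2**18 figure
# is an assumption derived from the published limits (1MB block -> 256GB,
# 8MB block -> ~2TB), not an official constant.

BLOCKS_PER_FILE = 2**18  # assumed addressable file blocks per file

def max_vmdk_bytes(block_size_mb: int) -> int:
    """Approximate VMFS-3 maximum file size for a given block size (MB)."""
    return block_size_mb * 2**20 * BLOCKS_PER_FILE

for bs in (1, 2, 4, 8):
    print(f"{bs}MB block -> ~{max_vmdk_bytes(bs) // 2**30} GB max VMDK")
```

The point being: a "plan" that locked in a 1MB block size for flexibility, or an 8MB block size for headroom, baked in constraints that later VMFS versions simply removed.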
Maybe some of these technologies were thought of, but they wouldn't have been part of any "plan" until technology progressed to where it is today. Your plan and direction will change, or you will get left behind and won't be able to take advantage of the new features that make IaaS achievable. So the "plan" should be there as a guide for the next 6-12 months, but don't etch it in stone or ever declare a finish line. Take it in steps/phases with the ability to re-purpose, re-direct, and keep moving forward. One of the best things about virtualization is its agility in adapting to new technologies: you don't have to do a forklift upgrade, and gain yourself a giant headache, to implement the next revolutionary technology shift.
Great insight into a real deployment, netlogix. To your point, building out a cloud is such a large undertaking that it's important to get something in place just to learn more about it. Picking up incremental knowledge instead of trying to master so many moving parts at once seems like a more realistic approach in the long run.