To the cloud and beyond

The last couple of weeks we talked about the cloud and the software-defined data center (SDDC), and how it all fits into your IT infrastructure. To be honest, I understand a lot of you have doubts when discussing cloud and SDDC. I know the buzzword lingo is strong, and it seems the marketing teams come up with new lingo every day. But all in all, I still believe the cloud (and with it the SDDC) is a tool you can’t simply dismiss as a marketing term.

One of the things mentioned was that the cloud is just someone else’s computer, and that is very true, but saying that overlooks some basics. We have had a lot of trouble in our datacenters, and departments sometimes needed to wait months before their application could be used, or before the change they asked for was done.

Saying your own datacenter can do the same things as the AWS/Azure/Google/IBM/etc. datacenters is wishful thinking at best. Do you get your own CPUs straight out of the Intel factory? Do you own the Microsoft kernel? I could go on with much more you will probably never see in your DC. And don’t get me wrong, I know some of you work in some pretty amazing DCs.

Let’s see if we can put it all together and come to a conclusion most of us can share. First, I think it is of the utmost importance to have your environment running at a high maturity level. I often see the business running to the public cloud and complaining that internal IT lacks the resources and money to perform at the same level as a public cloud. But throwing all your problems over the fence into the public cloud won’t fix them. No, it will probably make things even worse.

You’ll have to make sure you’re in charge of your environment before thinking of going public, if you want a winning strategy. For me, the hybrid cloud or the SDDC is the only true cloud for most of my customers, at least for the next couple of years. But most of them need to get their environments to the next level first, and there is only one way to do that.

Know thy environment….

We’ve seen it with outsourcing, and in some cases we are already seeing it in the public cloud: we want to go in, but we also need the opportunity to get out. Let’s start with going in:

Before we can move certain workloads to the cloud, we need to know our environment from top to bottom. There is no environment where nothing goes wrong, but environments where monitoring, alerting, remediation, and troubleshooting are done at every level of the infrastructure, and where money is invested to keep the environment healthy, tend to have a much smoother path towards the next generation of IT environments.

[Figure: the DART framework]

The DART framework can be used to reach the maturity level needed for the step towards the SDDC/hybrid cloud.
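
To make that alert-and-remediate cycle concrete, here is a minimal sketch of the loop described above. It is illustrative only: the service name and the systemd commands are assumptions about your environment, not something DART or any particular tool prescribes, and real monitoring suites do this at scale with far more nuance.

    #!/usr/bin/env python3
    """Minimal monitor -> alert -> remediate loop (illustrative sketch).

    The service name below is a placeholder; the checks assume a Linux
    host with systemd. Real monitoring suites do this at scale, but the
    basic cycle is the same.
    """
    import logging
    import subprocess
    import time

    SERVICE = "nginx"      # hypothetical service to watch
    INTERVAL = 60          # seconds between health checks

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def is_healthy(service: str) -> bool:
        """Monitor: ask systemd whether the service is active."""
        result = subprocess.run(["systemctl", "is-active", "--quiet", service])
        return result.returncode == 0

    def remediate(service: str) -> None:
        """Remediate: restart the service, and log it so there is a
        trail to troubleshoot later."""
        logging.warning("%s is down, attempting restart", service)
        subprocess.run(["systemctl", "restart", service], check=True)

    if __name__ == "__main__":
        while True:
            if not is_healthy(SERVICE):   # Alert: the check failed
                remediate(SERVICE)
            time.sleep(INTERVAL)

The point is not the script itself but the discipline: every layer of the environment should have a check, an alert, a remediation, and a log trail.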

We also talked about SOAP (Security, Optimization, Automation, and Reporting) as the way to reach the next level of infrastructure, and it is just as important as the DART framework. If you want to create that level of IT environment, you need to be in charge of all of these points. If you are able to create a stable environment on all these points, you’re able to move the right workloads to environments outside your own.

I’ve been asked to take a look at SolarWinds Server and Application Monitor (SAM) 6.3 and share my thoughts on it. For me it is just one of the tools you need in place to secure, optimize, and automate your environment, and to show your leadership what you’re doing and what is needed.
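
As a small taste of the reporting side, here is a sketch that pulls node health out of the Orion platform SAM runs on, using the Orion SDK’s Python client (orionsdk). The hostname and credentials are placeholders, and which fields are populated depends on your own installation; treat it as an assumption-laden starting point, not a recipe.

    # Sketch: query the Orion platform (which SAM runs on) for the
    # busiest nodes, using the orionsdk client (pip install orionsdk).
    # Hostname and credentials are placeholders for your own install.
    import requests
    from orionsdk import SwisClient

    # Many installs use self-signed certs; silence the warning for a demo.
    requests.packages.urllib3.disable_warnings()

    swis = SwisClient("orion.example.local", "admin", "password")

    # SWQL looks like SQL but runs against the Orion information service.
    results = swis.query(
        "SELECT TOP 10 Caption, Status, CPULoad, PercentMemoryUsed "
        "FROM Orion.Nodes ORDER BY CPULoad DESC"
    )

    for node in results["results"]:
        print(f"{node['Caption']}: status={node['Status']}, "
              f"cpu={node['CPULoad']}%, mem={node['PercentMemoryUsed']}%")

A short report like this, run on a schedule, is exactly the kind of “show and tell” material leadership responds to.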

I’ll dive into SAM 6.3 a bit deeper once I’ve had the time to evaluate the product a little further. Thanks for hanging in there, and for all those awesome comments. There are many great things about SolarWinds:

  1. They have a tool for all the things needed to get to the next-generation datacenter
  2. They know having a great community helps them become even better

So SolarWinds, congrats on that, and keep the good stuff coming. For the community: thanks for being there and helping us all get better at what we do.

  • Many smaller things could be outsourced to the cloud, I suppose.  The bean counters aren't going to ignore the potential cost savings for some.  In many industries it will be much, much harder to ever get anything onto shared infrastructure... the security implications are just too severe.

  • Trusting security for resources that are not 100% in your physical and virtual control is where ASPs fall short for me.

    I can physically secure my DCs, and I can monitor everything happening within them through security cameras, NPM, and other SolarWinds suites and tools.  I know what's happening there, in and out.  Not so the access & traffic at an ASP's location.

    Worse, trusting their monitoring, their people, and others' remote access into their facility is challenging at best.

    When an ASP provides SLAs with significant compensation/penalties for performance or security issues, and when they allow customers to do full monitoring of everything within the ASP's site (including bandwidth utilization, NetFlow info, CPU & memory on switches/routers/servers, etc.) . . .

    Last, but perhaps most frightening, is when an ASP like Microsoft builds an environment like Azure, defines several million IP addresses that need to have special access allowances in/out through Azure clients' firewalls, and then other Azure users can spin up a test VM environment within that same range of addresses and begin probing/hacking other Azure clients through those firewall rules allowing Azure address ranges.  And the VM is gone in two hours, along with those who used it to hack other Azure clients.

    It doesn't inspire confidence.  If I were a little guy, I'd want to leverage an ASP's resources, but I'd be trusting them with my livelihood and the data of my customers.  My group is a little larger, with six of our own DCs, but Microsoft's incentive of $5M for moving to O365 in Azure is a very big carrot.  How can you get that carrot without opening yourself up to trusting MS in an environment you can't control--and that they aren't controlling properly either?

  • Before we can move certain workloads to the cloud, we need to know our environment from top to bottom. There is no environment where nothing goes wrong, but environments where monitoring, alerting, remediation, and troubleshooting are done at every level of the infrastructure, and where money is invested to keep the environment healthy, tend to have a much smoother path towards the next generation of IT environments.

    (I hope I did that quote right...)

    Honest question.... how is that approach different than private cloud and physical servers? To me that sounds like fundamentally sound, disciplined IT administration.

  • I think part of the problem with the public cloud is that it's difficult to understand and work with, and you really don't get any guidance from the companies providing it.  We are a Hybrid Cloud service provider, but our business has been built around people.  We are people helping other people design new infrastructures or migrate infrastructures to a solution that meets their needs.  The important thing here is finding other people who do understand this stuff and can help you.  The technology has become too complicated for any one person to really understand or manage, so we need teams or groups of people to wrangle this stuff.  The public cloud may be approachable if all you need are a few basic services, but if you want to do anything complex, good luck figuring it all out, because it certainly isn't easy.

  • Granted, the "cloud" is a tool.  To me it is more of "the fog," because there are so many unseen things that may not appear until it is very late in the game.  Of course, fog is truly a cloud that is on the ground, so the analogy is not incorrect.  Drive around in the fog and you may not see an obstacle until it is too late.  The same is true for cloud computing.  You can't control various aspects of the environment....
