
Geek Speak

9 Posts authored by: arjantim

To the cloud and beyond

Posted by arjantim Sep 20, 2016

The last couple of weeks we talked about the cloud and the software-defined data center, and how it all fits into your IT infrastructure. To be honest, I understand a lot of you have doubts when talking about and discussing cloud and SDDC. I know the buzzword lingo is strong, and it seems the marketing teams come up with new lingo every day. But all in all, I still believe the cloud (and with it the SDDC) is a tool you can't just reject by dismissing it as a marketing term.

 

One of the things mentioned was that the cloud is just someone else's computer, and that is very true, but saying that means forgetting some basic stuff. We have had a lot of trouble in our data centers, and departments sometimes needed to wait months before their application could be used, or the change they asked for was done.

 

Saying your own data center can do the same things as the AWS/Azure/Google/IBM/etc. data centers is wishful thinking at best. Do you get your own CPUs out of the Intel factory? Do you own the Microsoft kernel? And I could continue with much more you will probably never see in your DC. Don't get me wrong: I know some of you work in some pretty amazing DCs.

 

Let's see if we can put it all together and come to a conclusion most of us can share. First, I think it is of the utmost importance to have your environment running at a high maturity level. Often I see the business running to the public cloud and complaining about their internal IT's lack of resources and money to perform at the same level as a public cloud. But throwing all your problems over the fence into the public cloud won't fix your problem. No, it will probably make it even worse.

 

You'll have to make sure you're in charge of your environment before thinking of going public, if you want a winning strategy. For me, the hybrid cloud or the SDDC is the only true cloud for many of my customers, at least for the next couple of years. But most of them need to get their environments to the next level, and there is only one way to do that.

 

Know thy environment….

 

We've seen it with outsourcing, and in some cases we are already seeing it in the public cloud: we want to go in, but we also need the opportunity to get out. Let's start with going in:

 

Before we can move certain workloads to the cloud, we need to know our environment from top to bottom. There is no environment where nothing goes wrong, but environments where monitoring, alerting, remediation, and troubleshooting are done at every level of the infrastructure, and where money is invested to keep the environment healthy, normally tend to have a much smoother walk towards the next generation of IT environments.

[Image: DART framework diagram]

The DART framework can be used to reach the level needed for the step towards SDDC/hybrid cloud.

 

We also talked about SOAR (Security, Optimization, Automation, and Reporting) as a way to make sure we get to the next level of infrastructure, and it is just as important as the DART framework. If you want to create that level of IT environment, you need to be in charge of all of these bullet points. If you are able to create a stable environment on all these points, you're able to move the right workloads to environments outside your own.


 

I've been asked to take a look at SolarWinds Server and Application Monitor (SAM) 6.3 and tell you something about it. For me, it is just one of the tools you need in place to secure, optimize, and automate your environment, and to show and tell your leadership what you're doing and what is needed.

 

I'll dive into SAM 6.3 a bit deeper once I've had the time to evaluate the product a little further. Thanks for hanging in there, and for giving all those awesome comments. There are many great things about SolarWinds:

 

  1. They have a tool for all the things needed to get to the next-generation data center
  2. They know that having a great community helps them become even better

 

So, SolarWinds, congrats on that, and keep the good stuff coming. To the community: thanks for being there and helping us all get better at what we do.

In the last couple of years, the business has constantly been asking IT departments how the public cloud can provide services that are faster, cheaper, and more flexible than the in-house solutions. I'm not going to argue with you over whether this is right or not; it is what I hear at most of my customers, and in a couple of cases the answer seems to be automation. The next-gen data centers that leverage a software-defined foundation use high levels of automation to control and coordinate the environment, enabling service delivery that will meet business requirements today and tomorrow.

 

For me, the software-defined data center (SDDC) provides an infrastructure foundation that is highly automated for delivering IT resources at the moment they are needed. The strength of the SDDC is the idea of abstracting the hardware and enabling its functionality in software. Due to the power of hardware these days, it's possible to use a generic platform with specialized software that enables the core functionality, whether for a network switch or a storage controller. Networks, for example, were once built from specialized hardware appliances; today, they are more and more virtualized with specialized software. Virtualization has revolutionized computing and allowed flexibility and speed of deployment. In today's IT infrastructures, virtualization enables both portability of entire virtual servers to off-premises data centers for disaster recovery and local virtual server replication for high availability. What used to require specialized hardware and complex cluster configuration can now be handled through a simple checkbox.

 

By applying the principles behind the virtualization of compute to other areas such as storage, networking, firewalls, and security, we can use its benefits throughout the data center. And it's not just virtual servers: entire network configurations can be transported to distant public or private data centers to provide full architecture replication. Storage can be provisioned automatically in seconds and perfectly matched to the application that needs it. Firewalls are now part of the individual workload architecture, not of the entire data center, and this granularity protects against threats inside and out, yielding unprecedented security. But what does it all mean? Is the SDDC some high-tech fad or invention? If you ask me: absolutely not. The SDDC is the inevitable result of the evolution of the data center over the last decade.

 

I know there is a lot of marketing fluff around the data center, and software-defined is one of those terms, but for me the SDDC is, for a lot of companies, the perfect fit for this time. As for what the future will bring, who knows where we'll stand in 10 years! The only thing we know is that a lot of companies are struggling with their IT infrastructure and need help bringing the environment to the next level. SDDC is a big step forward for most (if not all) of us, and call it what you like, but I'll stick to SDDC.


The Hybrid Cloud

Posted by arjantim Aug 23, 2016

Keeping it easy this post. It'll be just a small recap to build some tension for the next few posts. In the last two posts, I talked a little about the private and public cloud, and it is always difficult to write everything with the right words. So, I totally agree with most of the comments made, and I wanted to make sure a couple of them were addressed in this post. Let’s start with the cloud in general:

 

[Image: "There is no cloud"]

 

A lot of you said that the cloud is just a buzzword (or even just someone else’s computer).

 

I know it's funny, and I know people are still trying to figure out what the cloud is exactly, but for now we (our companies and customers) are calling it cloud. And I know we techies want to set things straight, but for now let's all agree on calling it cloud and just be done with it (for the sake of all the people who still see the computer as a magical box with stardust as its internal parts, and unicorns blasting rainbows as its administrators).

 

The thing is, I like the comments because I think posts should always be written as conversation starters. We are here to learn from each other, and that’s why we need these comments so badly.

 

The private cloud (or infrastructure) is a big asset for many of the companies we work for. But they pay a lot of money to just set up and maintain the environment, where the public cloud just gives them all these assets and sends a monthly bill. Less server cost, less resource cost, less everything, at least that’s what a lot of managers think. But as a couple of the comments already mentioned, what if things go south? What if the provider goes bankrupt and you can't access your data anymore?

 

In the last couple of years, we've seen more and more companies in the tech space come up with solutions, even for these kinds of troubles. With the right tools, you could make sure your data is accessible, even if your provider goes broke and the lights go out. Companies like Zerto, Veeam, SolarWinds, VMware, and many more are handing you tools to use the clouds as you want them, while still being in control and able to see what is going on. We talked about DART and SOAR, and these are very important in this era and the future ahead. We tend to look at the marketing buzz and forget that it's their way of saying that they often don't understand half of the things we do or say, and the same goes for a lot of people outside the IT department. In the end, they just want it KISS, and that's where a word like "cloud" comes from. But let's go back to hybrid.

 

So what is hybrid exactly? A lot of people I talk to are always very outspoken about what they see as hybrid cloud. They see the hybrid cloud as the best of both worlds: private and public clouds combined. For me, the hybrid cloud is much more than that. For me, it can be any combination, even all public but shared among multiple providers (multi-cloud, anybody?!?), or private and public clouds on-premises, and so on. In the end, the cloud shouldn't matter; it should just be usable.

 

For me, the hybrid solution is what everybody is looking for, the one ring to rule them all. But we need something software-defined to manage it all.

 

That's why my next post will be about the software-defined data center. It's another buzzword, I know, but let's see if we can learn a bit more from each other on where the IT world is going to, and how we can help our companies leverage the right tools to build the ultimate someone else’s computer.

 

See you in Vegas next week?!?


The Public Cloud

Posted by arjantim Aug 9, 2016

A couple of years ago, nobody really thought of the public cloud (although that might be different in the US), but things change, quickly. Since the AWS invasion of the public cloud space, we've seen a lot of competitors try to win their share in this lucrative market. Lucrative is a well-chosen word here, as most of the businesses getting into this market take a big leap of faith, and most of them have to take their losses for the first couple of years. But why should the public cloud be of any interest to you, and what are the things you need to think about? Let's take a plane and fly over to see what the public cloud has to offer, and whether it will take over the complete data center or just parts of it.

 

Most companies have only one purpose, and that is to make more money than they spend… And where prices are under pressure, there is really only one thing to do: cut costs. A lot of companies see the public cloud as cutting costs, as you're only paying for the resources you use, and not for all the other stuff that is also needed to run your own "private cloud". And because of this, they think the public cloud is cheaper than building their data centers anew every 5 years or so.

 

To be honest, in a lot of ways the companies are right. Moving certain workloads to the public cloud will certainly help to cut costs, but it might also make a great test/dev environment. The thing is, you need to determine the best public cloud strategy per company, and it might even be necessary to do it per department (in particular cases). But saying everything will be in the public cloud is a bridge too far for many companies… at the moment.

 

A lot of companies are already running loads of workloads in the public cloud, without even really understanding it. Microsoft Office 365 (and in particular Outlook) is one of the examples where a lot of companies use the public cloud, sometimes without really looking into the details and whether it is allowed by law. Yes, that's right: going public, you need to think about what can and what can't be put in the cloud. Some companies are prohibited by national law from putting certain parts of their data in a public cloud, so make sure to look into everything before telling your company or customer to go public.

 

Most companies choose a gentle path towards the public cloud, and choose the right workloads to go public. This is the right way to do it if you're an established company with your own way of working, but then again, you need to think not only of your own policies, but also of the laws your company needs to follow.

 

In my last post, on the private cloud, I mentioned the DART framework, as I think it is an important tool on the way to cloud (private at first, but public too). In this post on the public cloud, I want to go over the SOAR framework.

 

Security - In a public cloud environment, it is really important to secure your data. IT should make sure the public part(s) as well as the private part(s) are well secured and all data is safe. Governance, compliance, and more should be well thought out, and re-thought at every step of the way.

 

Optimization - The IT infrastructure is a key component in a fast-changing world. As I already mentioned, a lot of companies are looking to do more for less to get more profit. IT should be an enabler for the business, not some sort of firefighting crew.

 

Automation - Automation is the key to faster deployments. It's the foundation for continuous delivery and other DevOps practices. Automation enforces consistency across your development, testing, and production environments, and ensures you can quickly orchestrate changes throughout your infrastructure: bare-metal servers, virtual machines, cloud and container deployments. In the end, automation is a key component of optimization.
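To make "automation enforces consistency" a bit more concrete, here is a minimal sketch of a configuration drift check. This is my own illustration, not any particular product; the settings and environments are invented:

```python
# Hypothetical sketch: detect configuration drift across environments.
# The desired state and the environment data are made up for illustration.

desired = {"ntp": "pool.ntp.org", "log_level": "warn", "tls": "1.2"}

environments = {
    "dev":  {"ntp": "pool.ntp.org", "log_level": "debug", "tls": "1.2"},
    "test": {"ntp": "pool.ntp.org", "log_level": "warn",  "tls": "1.2"},
    "prod": {"ntp": "pool.ntp.org", "log_level": "warn",  "tls": "1.0"},
}

def find_drift(desired, actual):
    """Return the settings where an environment differs from the desired state."""
    return {key: (value, actual.get(key))
            for key, value in desired.items()
            if actual.get(key) != value}

for name, config in environments.items():
    drift = find_drift(desired, config)
    if drift:
        print(f"{name}: drift detected -> {drift}")
    else:
        print(f"{name}: consistent")
```

A real automation tool would then remediate the drift instead of only printing it, which is exactly the consistency enforcement described above.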

 

Reporting - Reporting is a misunderstood IT trade. Again, it is tightly connected with optimization, but also with automation. For me, reporting is only possible with the right monitoring tools. If you want to be able to do the right reporting, you need to have a "big brother" in your environment. Getting the right reports from public and private is important, and with those reports the company can further fine-tune the environment.
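As a small illustration of what I mean by reporting on top of monitoring, here is a hedged sketch that turns raw up/down poll results into a simple uptime report. The sample data is invented; a real tool would pull this from its monitoring database:

```python
# Hypothetical sketch: turn raw monitoring samples into a simple uptime report.
# Host names and poll results are invented for the example.

samples = {
    "web01": [True, True, True, False, True, True, True, True, True, True],
    "db01":  [True] * 10,
}

def uptime_report(samples):
    """Return uptime percentage per host, based on up/down poll results."""
    return {host: 100.0 * sum(polls) / len(polls)
            for host, polls in samples.items()}

for host, pct in sorted(uptime_report(samples).items()):
    print(f"{host}: {pct:.1f}% uptime over {len(samples[host])} polls")
```

The point is not the ten lines of Python, but that a report is only as good as the monitoring data feeding it.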

 

There is so much more to say, but I'll leave it at this for now. I really look forward to the comments, and I know there is no "right" explanation for private, public, or hybrid cloud, but I think we need to help our companies understand the strength of the cloud. Help them sort out what kind to use and how. We're here to help them use IT as IT is meant to be, regardless of the name we give it. See you next time, and in the comments!


The Private cloud

Posted by arjantim Jul 30, 2016

In a private cloud model, the control of a secure and unique cloud environment to manage your resources lies with your IT department. The difference from the public cloud is that the pool of resources is accessible only by you, which makes management much easier and more secure.

 

So, if you require a dedicated resource, based on performance, control, security, compliance or any other business aspect, the private cloud solution might just be the right solution for you.

 

More and more organisations are looking for the flexibility and scalability of cloud solutions. But many of these organisations struggle with business and regulatory requirements that, they think, keep them from being the right candidate for public or private cloud offerings.

 

It can be that you work within a highly regulated environment that is not suitable for public cloud, and you don't have the internal resources to set up or administer suitable private cloud infrastructure. On the other hand, it might just be that you have specific industry requirements for performance that aren't yet available in the public cloud.

 

In those cases, it could just be that the private cloud, as an alternative to the public cloud, is a great opportunity. A private cloud enables the IT department, as well as the applications themselves, to access IT resources as they are required, while the datacentre itself is running in the background. All services and resources used in a private cloud are defined in systems that are only accessible to the user and are secured against external access. The private cloud offers many of the advantages of the public cloud, but at the same time it minimises the risks. As opposed to many public clouds, the criteria for performance and availability in a private cloud can be customised, and compliance with these criteria can be monitored to ensure that they are achieved.

 

As a cloud or enterprise architect, a couple of things are very important in the cloud era. You should know your application (stack) and the way it behaves. By knowing what your application needs, you can determine which parts of the application could be placed where: private or public. A good way to make sure you know your application is to use the DART principle:

 

Discover - Show me what is going on

Alert - Tell me when it breaks or is going bad

Remediate - Fix the problem

Troubleshoot - Find the root cause
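The four DART stages can be sketched as a minimal monitoring loop. This is only my own illustration of the idea, not any particular product; the fake metric source and the 90% threshold are assumptions:

```python
# Minimal illustration of the DART loop: Discover, Alert, Remediate, Troubleshoot.
# The random metric source and the 90% threshold are invented for the example.

import random

def discover():
    """Discover: collect a metric (here a fake CPU reading)."""
    return {"cpu_percent": random.uniform(0, 100)}

def alert(metrics, threshold=90.0):
    """Alert: tell me when it breaks or is going bad."""
    return [name for name, value in metrics.items() if value > threshold]

def remediate(problem):
    """Remediate: fix the problem (here, just log the action we would take)."""
    print(f"remediating {problem}: restarting the offending service")

def troubleshoot(problem, history):
    """Troubleshoot: find the root cause by looking at recent readings."""
    print(f"root-cause hint for {problem}: last readings were {history[-3:]}")

history = []
for _ in range(5):
    metrics = discover()
    history.append(round(metrics["cpu_percent"], 1))
    for problem in alert(metrics):
        remediate(problem)
        troubleshoot(problem, history)
```

In real life each stage is a product feature rather than a function, but the loop is the same: watch, warn, fix, and then find out why.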

[Image: DART framework diagram]

 

If you run the right tools within your environment, it should be easy to discover what is going on in your environment, where certain bottlenecks are, how your application is behaving, and what its requirements are. With that knowledge, the step to hybrid is much easier to make, but that is for another post; first, I'll dive into the public cloud a little further next time.


You guys rock!

Posted by arjantim May 26, 2014

The last couple of weeks I've been the storage ambassador for the thwack community. I've really enjoyed sharing my thoughts with you guys and reading all of your comments. I noticed that the thwack community is great and there is a lot of knowledge floating around that needs to be shared.

 

In the ever-changing world of IT, storage is a component that really matters. In a software-defined world where every cloud does things as a service, storage is one of the keystones. Laying the foundation is done by choosing the materials and plotting the right sizes. I've seen a lot of companies struggle with their infrastructures because they didn't lay the right foundation.

 

But that's not really where I wanted to go. As said, I really liked the interaction in the thwack community. And it made me wonder what you guys think of it, and where you see room for improvement.

 

What are the questions you need answers on?

 

What would you want to hear more about?

 

What resources do you use?

 

There is so much we can help each other with, and I know you guys and girls rock, but let's keep helping each other! I for one am more than willing to answer the questions you have, and I'm eager to hear what you are using, reading, and working on.

 

See you in the comments! And keep on rocking!

With a lot of customers running their infrastructure almost 100% virtualized these days, I see more and more people moving away from array-based replication to application-based (or hypervisor-based) replication. With Zerto ZVM, VMware vSphere Replication, Veeam replication, PHD Virtual ReliableDR, and hyper-converged vendors offering their own replication methods, the once-so-mighty array-based replication feature seems beaten by application-based replication.

 

Last week I went to a customer where storage-based replication was the only replication type used, but their architects were changing the game in the virtual environment, and application-based replication was the road they wanted to ride… Where virtualization administrators and most of the other administrators saw a lot of potential in application-based replication, the storage administrators and some of the other administrators were more convinced of what storage-based replication had offered them all these years.

 

What about you? Do you prefer storage-level replication? Or is application/hypervisor-level replication the way to go?

 

In IT, we're used to having a buzzword every now and then. Most technicians just continue doing what they're good at and maintain and upgrade their infrastructure in a manner only they can, as they know how the company is serviced in the best way possible with the available funds and resources.

 


As a successor to the cloud buzzword, we now have software-defined everything: the Software-Defined Data Center (SDDC), Software-Defined Networking (SDN), Software-Defined Storage (SDS), and so on. And although I don't like buzzwords, there is a bigger meaning behind software-defined everything.

 


When looking at the present data center, you can see that the evolution of hardware has been very impressive. CPU, storage, RAM, and network have evolved to a stage where software seems to have become the bottleneck. That's where software-defined everything comes into the picture, making the software that will use the potential of the hardware. My only point is that saying everything should be "software defined" bypasses everything the hardware vendors have done, and will continue doing: a tremendous amount of hard work, research, and development to give us the data center possibilities we have today. So naming it "software defined" is wrong if you ask me.

 


Looking at storage, there is a lot of great storage software to leverage the just-as-great storage hardware. Look at some of the SolarWinds Storage Manager features, like:

 

 

  • Heterogeneous Storage Management
  • Storage Performance Monitoring
  • Automated Storage Capacity Planning
  • VM to Physical Storage Mapping

You can see how software uses the hardware to provide technicians the tools they need to manage their data center, and to give the customer the IT they need. In future posts, I would like to go deeper into some of these features, but for now I just wanted to share my thoughts on why software is an extension of hardware AND the other way around. You're more than welcome to disagree and leave your view in the comments.

 

See you in the comments

During the last few weeks (or even years), there was a lot of news on flash: flash as a cache, all-flash arrays (AFA), hybrid arrays, and flash in memory banks (UltraDIMM). It seems the start-ups make a lot of rumble about all this new technology, but looking at the announcements made during the first two days of EMC World 2014, it looks like the big fish are hunting the small ones, circling the smaller fish until they're ready to eat them… or not ;-D

 

Processing the IOPS as close to the CPU as possible, and with the quickest medium out there, seems to be the way to go. But we need a way to leverage this. Which application goes on which medium, and when will the medium "flush" the data to the next tier? Or will we just rely on caching software and let it decide where the data will be stored?
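To illustrate one possible answer, here is a hedged sketch of the caching-software approach: a tiny flash tier that keeps the hottest blocks and "flushes" the least-recently-used ones down to a capacity tier. The tier size and block names are invented for the example:

```python
# Hypothetical sketch of tiering: keep the hottest blocks in a small flash tier
# and flush the least-recently-used ones down to the capacity (HDD) tier.
# Tier size and block names are invented; real caching software is far smarter.

from collections import OrderedDict

class FlashTier:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.blocks = OrderedDict()   # insertion/access order tracks recency
        self.capacity_tier = set()    # where cold blocks get flushed to

    def access(self, block):
        """Serve a block from flash, promoting it; flush the coldest when full."""
        if block in self.blocks:
            self.blocks.move_to_end(block)       # hot again
        else:
            if len(self.blocks) >= self.capacity:
                cold, _ = self.blocks.popitem(last=False)
                self.capacity_tier.add(cold)     # "flush" to the HDD tier
            self.blocks[block] = True

tier = FlashTier(capacity=2)
for block in ["a", "b", "a", "c"]:   # "b" goes cold and gets flushed
    tier.access(block)
print("flash:", list(tier.blocks))             # flash: ['a', 'c']
print("flushed:", sorted(tier.capacity_tier))  # flushed: ['b']
```

Real caching software weighs far more than recency (IO size, read/write mix, QoS), but the core trade-off is the same: a small fast tier in front of a big cheap one.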

 

Everybody seems to be using flash as the tier behind RAM, but does that mean HDDs are dead? Or will they function as the capacity tier as long as flash is much more expensive than traditional storage? And what about the future, like ReRAM or other flash successors?

 

But in the midst of all this rumble, there is a question that needs answering… How do you manage this new storage world, and in particular, how do you manage the multiple storage arrays a lot of companies have these days? I'm writing a VMware design for a company in the Netherlands; they have HP and NetApp storage in their data centers, and managing them is not as simple as it could be… What is your secret to managing the data center of the future (or the one you're managing now)? Or do you already use the perfect tool, and are you willing to share?
