2 Replies Latest reply on Apr 9, 2013 1:51 PM by bsciencefiction.tv

    The ever-expanding data center and management monitoring

    otherscottlowe

      Today, vSphere continues to rule the roost when it comes to virtualization.  However, upstarts such as Hyper-V 3 have the potential to disrupt this space and, in some instances, give organizations a reason to move to a multi-hypervisor model.  In fact, many of today’s most powerful virtualization monitoring solutions, including SolarWinds Virtualization Manager, provide deep support for both vSphere and Hyper-V, from the host level right down to the spindle.  These tools provide administrators with critical at-a-glance metrics necessary to maintain expected levels of performance in the IT environment.  These tools also often provide capacity planning projections that can assist decision-makers in planning the long-term needs of the environment.  Of course, such tools continue to provide administrators with fault monitoring so that quick action can be taken to restore a service if it goes down.

      For many organizations, the ability to take a multi-hypervisor approach to monitoring is extremely important and is a first step toward abstracting the hypervisor choice and enabling more selection in that space.  However, many organizations are also adding cloud-based services to their IT portfolio.  These services will not operate in the traditional virtualized sense and, because they're running in someone else's data center, there may be completely different sets of management metrics that come with such services.  Or, at the very least, an administrator's reaction to management metrics may be different in a cloud environment.

      With CIOs considering the cloud for different kinds of workloads, monitoring needs may change, depending on where and how those workloads are operated.  In a pure hosting (non-cloud) environment, traditional monitoring might make the most sense, as these kinds of workloads are still just traditional physical or virtual servers running in someone else's data center.  Even though someone else may manage the workload, the customer may want deep performance metrics to ensure workload performance and to be able to hold a provider accountable to Service Level Agreement terms.

      In the cloud, capacity metrics may not have as much meaning since scalability becomes less of an issue, but ongoing performance metrics are still important items to consider.  In addition, availability metrics remain important, regardless of where a workload is running.
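      To make the availability point concrete: an availability check can be run the same way no matter where a workload lives, because it only needs network reachability. Here is a minimal sketch of a TCP reachability probe; the hostname and port in the example are placeholders, not anything from a specific product.

```python
import socket

def is_reachable(host, port, timeout=3.0):
    """Basic availability probe: can we open a TCP connection to the service?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures alike.
        return False

# Example probe of a hypothetical workload endpoint:
# is_reachable("app.example.com", 443)
```

      A real monitoring tool layers scheduling, alert routing, and SLA reporting on top of a probe like this, but the underlying signal is the same whether the server sits on-premises, at a hoster, or in the cloud.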

      In your opinion, as workloads shift to the cloud, what items do IT departments need to closely monitor and what tools should they use to accomplish their monitoring goals?




      Reply to this post to get 50 thwack points and an entry in the March Ambassador Engagement contest. An iPod Nano sits in the balance!

        • Re: The ever-expanding data center and management monitoring
          byrona

          I think that the important metrics to monitor will vary from one place to the next, depending on each organization's unique requirements and use cases.  Generally speaking, one could assume that the following would be important: system health (CPU, memory, volume space), application health (the specifics will be application-dependent), and system/application availability.
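          That checklist can be sketched as a simple threshold evaluation. The metric names and threshold values below are illustrative assumptions, not taken from SolarWinds or any other product; a missing metric is treated as an availability failure in its own right.

```python
# Illustrative thresholds for the system-health metrics listed above.
DEFAULT_THRESHOLDS = {
    "cpu_percent": 90.0,      # system health: CPU
    "memory_percent": 85.0,   # system health: memory
    "volume_percent": 80.0,   # system health: volume space
}

def evaluate_health(metrics, thresholds=DEFAULT_THRESHOLDS):
    """Return a list of alert strings for metrics that breach their thresholds."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is None:
            # No data is itself a signal: the collector or system may be down.
            alerts.append(f"{name}: no data (availability check failed)")
        elif value >= limit:
            alerts.append(f"{name}: {value:.1f}% >= {limit:.1f}%")
    return alerts

# Example: a host with high memory pressure and a missing volume metric.
sample = {"cpu_percent": 42.0, "memory_percent": 91.5}
for alert in evaluate_health(sample):
    print(alert)
```

          Application-health checks would plug into the same pattern, just with application-specific metrics and thresholds.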

           

          <Warning: Gratuitous Company Service Plug>

          As a service provider, we offer monitoring for all of these things as part of our managed solution and give the customer access to all of the data so that they don't need to provide their own solution.  If customers want to have their own monitoring system, we will work to provide them the necessary level of access for that to work as well.  We also provide more custom monitoring options, including SIEM solutions, all paired with our 24x7x365 NOC for customers requiring more specific solutions.

          </Warning: Gratuitous Company Service Plug>

           

          Generally speaking, the tools used are really a matter of preference so long as they meet the necessary technical requirements.  We use SolarWinds tools as our general monitoring aggregation toolset; however, we also use the vendor-provided tools internally.

          • Re: The ever-expanding data center and management monitoring
            bsciencefiction.tv

            I think the issue is not only what to monitor, but also who you trust to do the monitoring.  For example, in our organization, our team monitors the cloud servers via SAM, but the cloud team also self-monitors with vCOps.  So whose data do you trust?  Which one takes priority: a call from the NOC with a SAM alarm, or silence from vCOps monitoring itself?

             

            Cloud computing will create a lot of questions, and it is as important to trust your tools as the data they deliver, because if you do not trust the tool, then cries of "false alert" are soon to follow.

             

            But to the original question: CPU and memory monitoring speak to the allocation of resources, as does volume monitoring.  You will always need your various application, service, and process monitoring, regardless of platform.