
Virtualization of Orion NPM with up to 5 polling engines, an HA engine, and the SQL cluster

I work for a Fortune 100 company that is on a path of virtualizing anything and everything. I have suffered through this twice before, with Orion and CiscoWorks, only to have them moved back to hardware due to the resource requirements. In my case I am being forced to virtualize multiple Orion clusters, each cluster consisting of 1 Orion SLX NPM, up to 5 SLX polling engines, 1 HA engine, and finally the SQL cluster. I have a huge monitoring environment with 40+ customer pollers on 4-5k nodes, so any reduction in resources could result in missed customer SLAs.

My question is: has anyone had experience with large Orion environments, including the SQL resources, ported over to virtual machines that they can share?


Don't ask me why, but when I posted the thread it populated my profile information with very old Orion version data. Below is my current environment:

  • Multiple Orion clusters supporting vast 7x24x365 environments
  • 1 x Orion SLX NPM 9.5-SP4 server, 8 GB RAM, 8 Xeon CPUs
    • 1 x Orion SLX polling engine 9.5-SP4, 4 GB RAM, quad Xeon CPU
    • 1 x x64 SQL 2005 cluster, 2 servers, each with 16 GB RAM and 8 Xeon CPUs
  • 1 x Orion SLX NPM 9.5-SP4 server, 8 GB RAM, 8 Xeon CPUs
    • 4 x Orion SLX polling engines 9.5-SP4, each 4 GB RAM, quad Xeon CPU
    • 1 x Orion HA engine 9.5-SP4, 4 GB RAM, quad Xeon CPU
    • 1 x x64 SQL 2005 cluster, 2 servers, each with 16 GB RAM and 8 Xeon CPUs
  • 1 x Orion SLX NPM 9.5-SP4 server, 8 GB RAM, 8 Xeon CPUs
    • 4 x Orion SLX polling engines 9.5-SP4, each 4 GB RAM, quad Xeon CPU
    • 1 x Orion HA instance
    • 1 x x64 SQL 2005 cluster, 2 servers, each with 16 GB RAM and 8 Xeon CPUs
  • 1 x Orion SLX NPM 9.5-SP4 server, 8 GB RAM, 8 Xeon CPUs
    • 4 x Orion SLX polling engines 9.5-SP4, each 4 GB RAM, quad Xeon CPU
    • 1 x Orion HA instance
    • 1 x x64 SQL 2005 cluster, 2 servers, each with 16 GB RAM and 8 Xeon CPUs
  • 1 x Orion SLX NPM 9.5-SP4 server, 8 GB RAM, 8 Xeon CPUs
    • 1 x Orion SLX polling engine 9.5-SP4, 4 GB RAM, quad Xeon CPU
    • 1 x Orion HA instance
    • 1 x x64 SQL 2005 cluster, 2 servers, each with 16 GB RAM and 8 Xeon CPUs
  • 1 x Orion SL2000 NPM instance
    • 1 x x64 SQL 2005 cluster
  • 1 x Orion SL2000 NPM instance
    • 1 x x64 SQL 2005 cluster, 2 servers, each with 16 GB RAM and 8 Xeon CPUs
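
For scale, here is a rough tally of the hardware listed above as a minimal Python sketch. It only sums the entries with published specs (the HA instances listed without specs and the two SL2000 installs are omitted), so treat it as back-of-envelope arithmetic rather than a sizing guide.

# Rough tally of the hardware in the inventory above. Only entries with
# published specs are counted, so this is back-of-envelope arithmetic.

# (name, ram_gb, cpus, count) per line-item type
items = [
    ("SLX NPM servers",        8,  8,  5),
    ("SLX polling engines",    4,  4, 14),   # 1 + 4 + 4 + 4 + 1
    ("HA engine (with specs)", 4,  4,  1),
    ("SQL 2005 cluster nodes", 16, 8, 12),   # 6 clusters x 2 servers
]

total_ram = sum(ram * count for _, ram, _, count in items)
total_cpu = sum(cpu * count for _, _, cpu, count in items)

for name, ram, cpu, count in items:
    print(f"{count:>3} x {name:<24} {ram:>3} GB RAM, {cpu} CPUs each")
print(f"Specified total: ~{total_ram} GB RAM, ~{total_cpu} CPUs")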

    I do not run any virtual servers but did read quite a bit on it here and in the Admin guide.  I think Congo hit it right on the head.  If it were me, I would press my boss(es) to keep the SQL installations on standalone servers and move to virtualize everything else.


    Can I ask what your shared storage hardware is for your SQL clusters currently? I.e., model, number of disks, RAID configuration, and model of disks?

    I have all of my 5 pollers on an ESX infrastructure, but a physical SQL server. The VMware guys claim that vSphere should be nearly as good as physical boxes these days, especially if you are going to use shared storage for the SQL (i.e. in a cluster) anyway... I would suspect that if you had a nice big LUN with lots of spindles and used direct-path I/O you would be in a good place, but I would be nervous without tons of load testing...
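
One way to get that load-testing baseline on the SQL side is to sample SQL Server's cumulative I/O stall counters before and after the move. The sketch below is a minimal example assuming Python with pyodbc and read access to the instance (VIEW SERVER STATE permission); the server name is a placeholder, and sys.dm_io_virtual_file_stats / sys.master_files are the standard SQL Server 2005+ views for per-file I/O latency.

# Minimal I/O latency baseline for the Orion SQL instance, assuming Python
# with pyodbc installed. Driver and server names are placeholders.
import pyodbc

CONN_STR = (
    "DRIVER={SQL Server};"
    "SERVER=ORION-SQL-CLUSTER;"   # hypothetical cluster network name
    "DATABASE=master;"
    "Trusted_Connection=yes;"
)

# Per-file cumulative I/O stats since SQL Server startup.
QUERY = """
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       mf.physical_name                                      AS file_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_stall_ms DESC
"""

def main():
    conn = pyodbc.connect(CONN_STR)
    try:
        for row in conn.cursor().execute(QUERY):
            print(f"{row.database_name:<20} {row.file_name:<50} "
                  f"reads={row.num_of_reads:>10}  writes={row.num_of_writes:>10}  "
                  f"avg read stall={row.avg_read_stall_ms} ms  "
                  f"avg write stall={row.avg_write_stall_ms} ms")
    finally:
        conn.close()

if __name__ == "__main__":
    main()

Sampling this for a while on the physical cluster under normal polling load, then again on the virtualized cluster, gives a concrete before/after comparison of storage latency rather than relying on vendor claims.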


    Hi tom--

    I know that DonaldFrancis and warbird both run NPM in large environments. I'll email them and see if they can chime in and help.

    M


    This is a current thread where jtimes explains his massive install. It does not appear that any of it is virtualized; however, it is similar in size to yours.

    From reading your thread and comparing it to my own personal experience, I can say that you should be able to combine a lot of that hardware into a nice ESX cluster, but keep the hardware SQL servers. I have found that the pollers are not very hardware intensive. My single virtual poller has only 3.0 GHz and 2 GB of RAM dedicated to it, and it runs like a champ. It currently polls 7,076 elements.
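
As a rough sanity check on that poller figure, here is a back-of-envelope calculation; the polling intervals below are assumptions (typical status/statistics defaults), not settings confirmed in this thread.

# Back-of-envelope estimate of SNMP polls per second for one polling engine.
# The intervals are assumptions -- substitute your actual NPM settings.

ELEMENTS = 7076              # elements on the virtual poller described above
STATUS_INTERVAL_S = 120      # assumed status-poll interval, seconds
STATS_INTERVAL_S = 600       # assumed statistics-collection interval, seconds

status_rate = ELEMENTS / STATUS_INTERVAL_S
stats_rate = ELEMENTS / STATS_INTERVAL_S

print(f"Status polls/sec:     {status_rate:.1f}")
print(f"Statistics polls/sec: {stats_rate:.1f}")
print(f"Combined polls/sec:   {status_rate + stats_rate:.1f}")

That works out to only a few tens of polls per second per engine, which is consistent with the observation above that the pollers themselves are not very hardware intensive compared with the SQL back end.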


    Awesome, Congo!

    Thanks for chiming in.

    M
