
What is the max. number of pollers writing to a single DB?

I have multiple standalone Orion installations which I would like to consolidate.

I thought EOC would do the trick originally, but it doesn't work for our organization.

I found that EOC really impacted the performance of the web pages on the other Orion servers when it pulled down stats from them every 5 minutes.

I ended up disabling the EOC polling to restore the usability of the other Orion web servers.

When you are doing real-time troubleshooting, it is painful to wait for a minute or more for a page to load.

I am refreshing 3 of my Orion servers this year - 1 Web/poller, 1 poller and a dedicated DB server.

I would like to build a large Windows DB server with 64 GB of RAM and an 8-disk RAID 1+0 array which would be used for all Orion pollers.

Of course I would have to change the licensing from web/poller to secondary poller only on some of the servers.

So the question becomes, how much data can I send to a single DB server and maintain good performance with no gaps in data?

I would like to be able to poll 3,000+ nodes with 50,000+ interfaces using at least 4 or 5 pollers.
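For rough sizing it helps to put numbers on the write load. Here is a minimal back-of-envelope sketch in Python, assuming default-ish statistics intervals (roughly 10 minutes for node stats and 9 minutes for interface stats; your polling settings may differ), of the steady-state row insert rate that node and interface count would generate:

```python
# Back-of-envelope estimate of statistics rows/sec written to the Orion DB.
# Intervals are assumed defaults; substitute your actual polling settings.
nodes = 3000
interfaces = 50000

node_stat_interval_s = 10 * 60    # node statistics every ~10 minutes (assumed)
iface_stat_interval_s = 9 * 60    # interface statistics every ~9 minutes (assumed)

node_rows = nodes / node_stat_interval_s
iface_rows = interfaces / iface_stat_interval_s

print(f"Node stats:      {node_rows:6.1f} rows/sec")
print(f"Interface stats: {iface_rows:6.1f} rows/sec")
print(f"Total:           {node_rows + iface_rows:6.1f} rows/sec"
      " (status polls, rollups, syslog/traps extra)")
```

Around 100 statistics rows a second is modest on its own; it's the status polling, rollup jobs, and syslog/trap ingest on top of that which drive the real I/O load.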

I'm wondering what is the largest single Orion installation out there?

  • We currently have:

    6 virtualized pollers (1 primary, 5 additional) split between two locations

         the primary is in the same location as the database server

    separate (physical) database connected to SAN

    separate additional web server

    In this environment we're monitoring 8,200 nodes, which have 32,000 disks and 8,400 interfaces, for a grand total of about 49,000 elements.

    We're also running SAM, and currently have about 25,000 SAM components deployed.

    We're in the midst of purchasing 2 more additional pollers, and with that said we expect to grow by another 2,000-3,000 nodes, plus a whole mess of disks now that SAM 5.5 supports volume mount points.

    Granted, we have a few cards up our sleeve: the database server has 128 GB of RAM, and our SAN is tricked out to the point where we could hold the entire DB in RAM if we wanted.

    But all of that said, I think what you are planning is not outside the realm of reality, and certainly not the largest SolarWinds installation out there (nor is ours, not by a long shot).

  • Art,

    It's typically more about the number of elements per Orion instance.  We have customers that run ~100,000 elements per environment with no performance issues.  The environment you've described is very similar in size to the one I managed at WKU.  I'd be happy to discuss that setup if you're interested.

    Jeff

  • Thanks for the replies so far.

    It's certainly easier to right-size the servers and DB when one has specs for similar installations.

    Here is what SolarWinds reports:

    Network Elements: 23,004
    Nodes: 5,709
    Interfaces: 17,295
    Volumes: 0
    Events: 87,676
    Pollers: 337,936
    Polling Engines: 5

    We have 16 spindles on our database server holding the database, split across two RAID sets (one for data, one for log), and 96 GB of RAM. SQL Server is consuming about 64 GB as I write this. There's little contention between pollers, since they are (by definition) writing data for different sets of nodes into the database.

    The biggest thing missing from my stats is the load from UDT, NTA, syslog, and trap data. These represent quite a large flow of data into my system, and I can't see them quantified anywhere.
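    For anyone wanting to put a number on that, here is a rough sketch in Python with pyodbc that counts the last hour of syslog and trap rows. The server name is hypothetical, and the dbo.Syslog / dbo.Traps table names and DateTime column are assumptions based on common Orion schemas; verify them against your own database before running.

    ```python
    # Rough syslog/trap ingest rate, counted over the last hour.
    # Table/column names (dbo.Syslog, dbo.Traps, DateTime) are assumptions --
    # check them against your Orion DB schema first.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=OrionDBServer;"   # hypothetical server name
        "DATABASE=SolarWindsOrion;Trusted_Connection=yes"
    )
    cur = conn.cursor()

    for table in ("dbo.Syslog", "dbo.Traps"):
        cur.execute(
            f"SELECT COUNT(*) FROM {table} "
            "WHERE DateTime >= DATEADD(hour, -1, GETDATE())"
        )
        count = cur.fetchone()[0]
        print(f"{table}: {count} rows in the last hour (~{count / 3600:.1f}/sec)")

    conn.close()
    ```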

    I think you might need more than 8 spindles in a RAID 1+0 array. One needs to look at technologies that support a high transaction rate against the database, and more spindles is one way to get there.
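    To put rough numbers on the spindle question: RAID 1+0 carries a write penalty of 2, since every logical write lands on two mirrored disks. A quick Python estimate, assuming ~180 IOPS per 15k spindle (a common planning figure, not a measured one):

    ```python
    # Rough usable write IOPS for a RAID 1+0 array.
    # ~180 IOPS/spindle is a typical planning number for 15k disks -- an assumption.

    def raid10_write_iops(spindles, iops_per_spindle=180, write_penalty=2):
        """Usable write IOPS: every logical write costs two physical writes."""
        return spindles * iops_per_spindle / write_penalty

    for n in (8, 16, 24):
        print(f"{n:2d} spindles -> ~{raid10_write_iops(n):.0f} write IOPS")
    ```

    By that math an 8-spindle array tops out around 700 write IOPS before caching, which is why more spindles (or controller/SAN cache, or SSD) helps the transaction rate.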