
190K elements to monitor

Hello,

Phase one is 190K elements to monitor, and phase two is another 190K.

I am looking at the architecture now. From what I see, I will need 3 main Orion servers with NPM, NCM, etc.

Orion 1 will have 10 pollers with their 10 HAs,

Orion 2 will also have 10 pollers with their 10 HAs,

Orion 3 will also have 10 pollers with their 10 HAs.

I will need EOC for this to be able to view the 3 environments.

Now, here are my questions:

Is it feasible that way? I know that SolarWinds has some limits.

Can SolarWinds monitor all of these elements, almost 400K in total?

Let me know if this has been tested and what the right way to do it is.

Thanks,

  • You can monitor up to about 12K elements with a single poller before you start to see performance issues. That said, there are people polling 18K or more elements without problems; it depends on your environment, server resources, etc. As long as your polling rate doesn't go over 85%, you should be good. (A rough back-of-the-envelope sizing calculation is sketched below.)

    I would recommend taking the Orion scaling and tuning class or scheduling a call with a tech representative.
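
    For a rough sanity check, here is a minimal back-of-the-envelope calculation in Python using the numbers from this reply; the 12K-elements-per-poller ceiling and the 85% polling-rate target are rules of thumb from this thread, not official SolarWinds sizing guidance.

    ```python
    # Rough poller-count estimate for one 190K-element Orion instance.
    # The per-poller ceiling and the 85% target are assumptions from this thread.
    elements_per_instance = 190_000
    elements_per_poller = 12_000      # assumed comfortable ceiling per polling engine
    target_polling_rate = 0.85        # stay at or below 85% of that ceiling

    usable_per_poller = int(elements_per_poller * target_polling_rate)   # 10,200
    pollers_needed = -(-elements_per_instance // usable_per_poller)      # ceiling division

    print(f"Usable elements per poller: {usable_per_poller}")
    print(f"Pollers needed for {elements_per_instance} elements: {pollers_needed}")
    # -> about 19 pollers per instance, so the planned 10 pollers per instance
    #    would be running well above the 85% polling-rate target.
    ```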

  • I've worked on some pretty large SolarWinds deployments, and you should be prepared: if you are trying to operate at that scale, you will need to become a SolarWinds expert very quickly, or get some on staff. Things will get complicated, and administering 3 instances can easily end up taking several FTEs if they are not experts who can use the API/SQL to synchronize things and make good use of all that data.

    I'd also suggest you go ahead and start planning an external data warehouse for your historical metrics. The big choke point at that scale will likely be the databases growing very large because of all the historical data, and big databases tend to slow down the application. It would probably be a good idea to have the web consoles retain only a relatively recent history of what's going on, and do all longer-term reporting and analytics from the warehouse. Trying to get a report within the console for something like 30 days of average interface bandwidth utilization across 200K interfaces will be a serious struggle to keep from timing out; it's easier to start planning for that now. A rough sketch of that kind of warehouse feed follows below.
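
    As a concrete example of that kind of feed, here is a minimal sketch using the orionsdk Python client to pull recent interface traffic out of each instance through the SWIS API and hand it to a warehouse loader. The host names, service account, retention window, and the SWQL entity/column names (Orion.NPM.InterfaceTraffic, InAveragebps, OutAveragebps) are assumptions for illustration; check them against your own SWIS schema before building on anything like this.

    ```python
    # Sketch: periodic pull of recent interface traffic from each Orion instance
    # into an external data warehouse, so the web console only needs short history.
    # Hosts, credentials, and SWQL entity/column names are illustrative assumptions.
    from datetime import datetime, timedelta
    from orionsdk import SwisClient   # pip install orionsdk

    ORION_INSTANCES = ["orion1.example.com", "orion2.example.com", "orion3.example.com"]
    USERNAME, PASSWORD = "svc_warehouse", "changeme"   # hypothetical service account

    def pull_interface_traffic(host, since_iso):
        """Query one instance for interface traffic rows newer than since_iso."""
        swis = SwisClient(host, USERNAME, PASSWORD)
        swql = """
            SELECT InterfaceID, DateTime, InAveragebps, OutAveragebps
            FROM Orion.NPM.InterfaceTraffic
            WHERE DateTime >= @since
        """
        return swis.query(swql, since=since_iso)["results"]

    def load_into_warehouse(host, rows):
        """Placeholder for your warehouse loader (bulk insert, COPY, etc.)."""
        print(f"{host}: {len(rows)} rows to load")

    if __name__ == "__main__":
        since_iso = (datetime.utcnow() - timedelta(days=1)).isoformat()
        for host in ORION_INSTANCES:
            rows = pull_interface_traffic(host, since_iso)
            load_into_warehouse(host, rows)
    ```

    Running something like this on a schedule keeps the long-term history in the warehouse, where a 30-day report across 200K interfaces becomes a warehouse query instead of a console report that risks timing out.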

  • Thank you for the replies :-)

    Can you tell me how we do the calculations to know the number of CPUs needed for this environment?

    It will be a physical server for Orion and for SQL.

    Any hints ?

    Thanks again

  • I try not to make estimates of what CPU loads would actually look like; they are SUPER variable depending on the type of polling you do, how often you poll, and a bunch of other things that come into play. If you are provisioning a real physical server, then I would probably shoot for something with 16+ CPUs for an environment with hundreds of thousands of elements. I've never seen a SolarWinds server max out more than that unless something was broken, but it's a huge pain to decide 2 years from now that a new feature needs more CPU and find yourself bottlenecked.

    It's similar on the SQL side: for SolarWinds purposes, faster CPUs are more helpful than more cores.