Good question. From a network load point of view, we recommend having each polling engine monitor the nodes closest to it in physical/logical terms, to cut down on network traffic.
In practice, customers weigh a few other criteria, and time is usually the biggest one. While moving nodes between polling engines is easy, there is associated work to update ACLs/firewalls, SNMP configurations and flow exports to ensure devices are reachable from the additional polling engine. I see you have NCM, so for your switches/routers this could be as easy as running scripts in bulk against them.
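For example, an NCM bulk change script pushed to IOS-style devices might look something like the sketch below. The ACL number, community string, and poller IP are placeholders, not values from your environment:

```
! Hypothetical sketch: permit the additional polling engine's IP
! (10.0.0.25 here) in the SNMP access-list, then reapply the ACL
! to the read-only community string.
access-list 10 permit 10.0.0.25
snmp-server community YOURCOMMUNITY RO 10
```

Running something like this as one NCM job against all managed switches/routers is far faster than touching each device by hand.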
An interesting setting for your modules is the SAM Polling Engine Mode (found in Settings -> SAM Settings -> Data and Database Settings). http://www.solarwinds.com/NetPerfMon/SolarWinds/OrionAPMAGPollingEngineMode.htm has lots of information on the settings on this page. The ability to keep all SAM monitoring on the Primary poller can be advantageous: you know you only have to worry about WMI/SSH/ODBC/etc. connectivity from one server. Keeping all your SAM polling on the Primary poller won't scale in the long run, but in the short/medium term it can cut down on the work associated with the initial balancing.
A few strategies I've seen used effectively are, as you mention, 40/60 and 30/70, but also 80/20, where an initial 20% of nodes are moved off the Primary to reduce its load and all new nodes are then added to the secondary. This can simplify the node-adding process for large teams, as well as cut down on the reconfiguration of ACLs and SNMP clients.
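The arithmetic behind any of these splits is simple enough to sketch. The helper below is a hypothetical illustration (the node count and ratio are made-up examples, not from your deployment):

```python
# Hypothetical sketch of the rebalance arithmetic for a split like 80/20:
# given all nodes currently on the Primary, how many move to the secondary?
def nodes_to_move(total_nodes: int, primary_share: float = 0.80) -> int:
    """Return the number of nodes to move off the Primary so that it
    keeps primary_share of the total."""
    keep_on_primary = int(total_nodes * primary_share)
    return total_nodes - keep_on_primary

# Example: 5000 nodes with an 80/20 split means 1000 nodes move.
print(nodes_to_move(5000))  # → 1000
```

The same function covers a 40/60 or 30/70 plan by changing `primary_share`; the real cost is not the move itself but the ACL/SNMP rework per moved node, which is why minimizing the initially moved set (as in 80/20) appeals to large teams.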
The real trick is to actually monitor your changes and see how they impact performance. Measuring the effect of each move gives you feedback on how successful the deployment has been.
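One easy thing to watch is how nodes are distributed across engines as you rebalance. A SWQL sketch along these lines (assuming the standard Orion.Nodes and Orion.Engines entities; run it from SWQL Studio or the Orion SDK) could be:

```sql
-- Hypothetical SWQL sketch: count of monitored nodes per polling engine
SELECT e.ServerName, COUNT(n.NodeID) AS NodeCount
FROM Orion.Nodes n
INNER JOIN Orion.Engines e ON n.EngineID = e.EngineID
GROUP BY e.ServerName
```

Checking this before and after each batch of moves, alongside polling completion and CPU on each poller, tells you whether the split you picked is actually relieving the Primary.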
We have a webcast with some information that might help more: http://www.solarwinds.com/resources/webcasts/customer-training-orion-network-performance-monitor-npm-level-3.html. The webcast is very good in that it breaks down server specs, expected load, etc.
All the best