We are moving our main data center 300 miles away. I had opened a support case about the move and was told there would be no problems, since the application is designed to be distributed. Their example: an office in New York could have a poller, San Francisco could have a poller, and the main app and SQL server could be in Austin. I also had a phone call with their sales/support team prior to upgrading to an additional poller. Then I found this on their support site.
It says the setup is not recommended due to latency. We are moving forward anyway, since network monitoring is only a sliver of this data center move. I don't expect problems: our connection is 40 Gb with an expected round trip of 30 ms. It should be interesting to see how this works. We poll around 2,000 network nodes and 2,000 access points, and monitor around 50K UDT ports. Now that I'm typing this, I do wonder how some of the slower connections I already have will respond on the NCM inventory/config backups.
theG0at thanks for responding. At least this gives me a starting point and a basis for comparison from the information you shared. I just checked the RTT from our London office to AWS Oregon, where the majority of our services are hosted, and it is around 150 ms; it shouldn't be that bad from our US offices.
We certainly don't have that many network nodes to monitor: around 200 network nodes, 200 APs, and 1,500 interfaces. Now the question is: if we install our Orion and SQL server in the same AWS region, what would the acceptable latency be?
Acceptable latency from the SolarWinds server to the monitored endpoint devices is ultimately up to your discretion, but as long as it is under 1500 ms, you do not need to change any polling timeout settings. Customers concerned about latency delays either adjust the timeouts or deploy Additional Polling Engines closer to each region to get useful latency information.
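To make that 1500 ms rule of thumb concrete, here is a minimal sketch that flags monitored hosts whose worst-case RTT approaches the default timeout. The 80% headroom margin and all host names and RTT values are my own illustrative assumptions, not SolarWinds defaults or recommendations:

```python
# Hedged sketch: classify measured round-trip times against the default
# 1500 ms polling timeout mentioned above. Sample data is hypothetical.

DEFAULT_TIMEOUT_MS = 1500  # no polling-timeout change needed below this


def needs_timeout_review(rtt_samples_ms, margin_ratio=0.8):
    """Return hosts whose worst observed RTT nears or exceeds the timeout.

    rtt_samples_ms: dict mapping host name -> list of measured RTTs (ms).
    A host is flagged when its maximum RTT is at or above margin_ratio of
    the timeout, leaving headroom for jitter (the 80% margin is an
    assumption, not a vendor recommendation).
    """
    threshold = margin_ratio * DEFAULT_TIMEOUT_MS
    return sorted(host for host, samples in rtt_samples_ms.items()
                  if max(samples) >= threshold)


# Hypothetical measurements echoing the links discussed in this thread.
samples = {
    "london-to-oregon": [148, 152, 155],    # ~150 ms, well under timeout
    "dc-move-link":     [28, 31, 33],       # ~30 ms round trip
    "slow-branch-vpn":  [900, 1250, 1400],  # worst case nears the timeout
}
print(needs_timeout_review(samples))  # → ['slow-branch-vpn']
```

Anything this flags is a candidate for a longer timeout or a local Additional Polling Engine, per the scenarios below.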
Three main reasons Additional Polling Engines are used:
Scenario 1: Distribute load between SolarWinds servers.
Scenario 2: Localize polling load, usually to major sites that serve smaller sites within the region.
Scenario 3: Improve response-time reporting for local sites, usually with a polling engine deployed to every location.
I'm very interested in any successes or issues you run into. We are pushing for the same setup, while being very mindful of cost.
In addition, I wanted to set the system up to be much more server tolerant, i.e., RDS or clusters, with a load-balanced web front end.
You are welcome to engage me off of the blog if more comfortable with that.
It would be great to get some more details about installing NPM/SAM in the cloud (specifically AWS). We currently have it running on physical servers. Any insights?