14 Replies Latest reply on Jun 1, 2010 12:14 PM by wdouglas

    collection statistics


      Hi Everyone,

      We are fairly new to Orion and are wondering what would be considered aggressive polling intervals for nodes / interfaces / volumes?

      Also, what would be considered aggressive statistics collection intervals for nodes / interfaces / volumes?

      Basically, we are just trying to get a feel for how the majority of people on the forum have their performance settings configured.




        • Re: collection statistics

          I use the default of 10 minutes on most of our non-critical interfaces. Critical interfaces on core switches and routers I set to 2-5 minutes.

          I wish there were a way to make bulk interface statistics polling changes rather than having to change each individual interface. I have found that you can go into the Custom Property Editor, go into Settings, and enable "StatCollection". Then you get a nice view of what all your interfaces are set to poll at.

          • Re: collection statistics


            We use 300/300/300 (seconds) for polling nodes/interfaces/volumes.

            We then use 5/5/15 (minutes) for statistics collection on nodes/interfaces/volumes.

            We do this to stay compliant with 95th-percentile billing, which is calculated from 5-minute values.


            This has worked very well for us and has remained unchanged for a year.
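
            As an aside (my own sketch, not from the poster above): 95th-percentile billing is usually calculated from exactly these 5-minute samples, which is why collecting less often than every 5 minutes can put you out of compliance. A minimal illustration in Python, with made-up traffic numbers:

```python
# Sketch only: why 5-minute statistics matter for 95th-percentile billing.
# One sample every 5 minutes gives 8640 samples in a 30-day month; the
# 95th percentile discards the busiest 5% of samples (432 of them).

def percentile_95(samples):
    """Return the 95th-percentile value of a list of utilization samples."""
    ordered = sorted(samples)
    # Index below which 95% of the values fall (simple nearest-rank method).
    idx = int(0.95 * len(ordered)) - 1
    return ordered[max(idx, 0)]

# Hypothetical month of 5-minute Mbps readings: mostly 40, with 432 spikes to 900.
month = [40.0] * 8208 + [900.0] * 432
print(percentile_95(month))  # 40.0 - the spikes fall inside the discarded 5%
```

            Note this uses a simple nearest-rank method; billing systems may interpolate slightly differently.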

              • Re: collection statistics

                Thank you, all your posts have been most useful. Keep 'em coming!

                  • Re: collection statistics

                    Has anyone noticed any slowness or performance issues after lowering the statistics collection to 5 minutes or less? I'd like to change my setting from the default (10 minutes), but I don't want to sacrifice a lot of performance from it polling devices more often.

                      • Re: collection statistics

                        After making your change to 5 minutes, I would keep an eye on your polling completion %; if it drops below 99%, you are going to see gaps in your collection data.

                        I like to have "Polling Engines Status" on my main page so I can keep a close eye on how it's doing.

                        Also, here is a nice little query you can run when mass-updating interfaces. It may need some tweaking to fit your environment. Run it with the Orion services stopped.

                        SELECT a.sysname, b.interfacename, a.ctscontact, b.StatCollection
                        FROM nodes a, interfaces b
                        WHERE a.nodeid = b.nodeid
                          AND a.contact = '        '

                        Hope this is helpful
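
                        For what it's worth, the query above only reports the current values. If you actually want to mass-update, a companion UPDATE along these lines might work (a sketch only - the table and column names are taken from the query above, but the 2-minute value and the SysName filter are made-up examples for your environment; back up the database and stop the Orion services first):

```sql
-- Sketch only: set statistics collection to 2 minutes on interfaces
-- belonging to nodes whose SysName matches an example pattern.
-- Back up the Orion database and stop the Orion services before running.
UPDATE interfaces
SET StatCollection = 2
WHERE nodeid IN (
    SELECT nodeid
    FROM nodes
    WHERE sysname LIKE 'core-%'   -- hypothetical filter for critical devices
);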

                          • Re: collection statistics
                            Stacy Patten

                            For us, we had to break it down by device type: 2\5\10\15 minutes for statistics and polling across critical, high, medium and low devices. Fortunately we can manage this by name, and we have a set of batch jobs we can run that will re-apply the settings if somebody adds a new device and forgets to change the timing.

                            We had to turn our polling way down, and I'm sure that most of this is because our polling process is slow. Our Orion server is in Europe, but half of our nodes are in the Americas and APAC, so the latency hurts quite a bit.
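
                            The name-based tiering described above could be sketched roughly like this (my own assumption about the naming scheme, not the poster's actual batch job; the tier prefixes are illustrative, and the minute values follow the 2\5\10\15 tiers mentioned):

```python
# Sketch only: derive polling and statistics intervals from a hypothetical
# device-name prefix convention, mirroring the 2/5/10/15-minute tiers.

TIERS = {
    "crit": (2, 2),    # (polling minutes, statistics minutes)
    "high": (5, 5),
    "med":  (10, 10),
    "low":  (15, 15),
}

def intervals_for(device_name):
    """Return (polling, statistics) minutes for a name like 'crit-rtr01'.

    Unknown prefixes fall back to the 'low' tier, so a forgotten new
    device at least gets a safe, conservative setting.
    """
    tier = device_name.split("-", 1)[0].lower()
    return TIERS.get(tier, TIERS["low"])

print(intervals_for("crit-core-rtr01"))  # (2, 2)
print(intervals_for("printer-07"))       # (15, 15) - falls back to 'low'
```

                            A batch job could then loop over the node list and re-apply these values to anything that drifts from its tier.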

                    • Re: collection statistics

                      Hi folks,

                      I've just been looking at statistics collection as I have a requirement to generate some graphs regarding link utilization.

                      I had my nodes, volumes and interfaces all set to 5 minutes but was only achieving 98.37% poll success.

                      I appreciate that the comments above say we will see some gaps, but I'm seeing huge gaps: instead of every 5 minutes, we only seem to be recording data every 20-30 minutes. I've checked a few interfaces and they each seem to be doing the same.

                      We do have a fair load on the server - 9,500 elements, although this was over 12,000 a few weeks ago. The server is overspecced compared to what SolarWinds recommends, however.

                      Does anybody else have this occurring? Any suggestions on confirming whether this is caused by the load? Maybe unmanage half the nodes and monitor?

                      Any help appreciated.