I'm trying to reconcile the bandwidth figures IPMonitor shows me with what I know to be true from measurements at other points on the network.
I took a standard Gigabit switch, connected a workstation to it, and started a high-throughput copy. The client OS shows a copy rate of about 70 MB/sec (megabytes/sec). I checked the switch's web console (it's a managed switch): same thing. I checked the network core the switch is plugged into via fiber backbone: same thing. So I know about 70 MB/sec is moving through this pipe in one direction.
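For reference, here's the unit arithmetic I'm working from (a quick sketch of my own; the variable names are mine, nothing here comes from IPMonitor):

```python
# Sanity check on the observed copy rate, converted to the units a
# switch or monitoring tool might report instead.

copy_rate_mb_per_sec = 70                       # megabytes/sec, per the client OS

rate_mbit_per_sec = copy_rate_mb_per_sec * 8    # megabits/sec
gigabit_link_capacity_mbit = 1000

utilization = rate_mbit_per_sec / gigabit_link_capacity_mbit

print(f"{copy_rate_mb_per_sec} MB/s = {rate_mbit_per_sec} Mbit/s "
      f"({utilization:.0%} of a Gigabit link)")
```

So 70 MB/sec is a plausible rate for a Gigabit link — just over half the pipe — which makes the 10-15 MB/sec reading stand out even more.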
So, off to IPMonitor, which is grabbing interface statistics off the switch, presumably via SNMP. For greater granularity, I set all measurement intervals to 60 seconds in the Timing section of the monitor for the relevant switch port, and the sample size is 60 seconds for "Analysis of Test Results." The result? IPMonitor, through its various graphs, shows about 10-15 MB/sec for this interface, and that remains true across a 10+ minute copy.
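To be clear about my mental model: the standard way an SNMP-based monitor computes bandwidth is to poll the interface octet counters (IF-MIB ifInOctets / ifOutOctets) twice and divide the delta by the interval. This is a sketch of that method with invented counter values — I have no idea whether IPMonitor actually does it this way:

```python
# Standard SNMP rate calculation: sample the octet counter twice,
# divide the delta by the polling interval. Counter values below are
# made up for illustration.

def throughput_mb_per_sec(octets_t0, octets_t1, interval_sec):
    """Average throughput between two counter samples, in megabytes/sec."""
    delta_octets = octets_t1 - octets_t0    # bytes transferred during the interval
    return delta_octets / interval_sec / 1_000_000

# Two hypothetical samples taken 60 seconds apart during a ~70 MB/sec copy:
sample_t0 = 1_000_000_000
sample_t1 = sample_t0 + 70 * 1_000_000 * 60    # ~70 MB/sec sustained for 60 sec

print(throughput_mb_per_sec(sample_t0, sample_t1, 60))    # → 70.0
```

If IPMonitor is doing something like this, a 60-second interval during a sustained copy should average out to roughly the real rate, not a fraction of it.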
So, it would be helpful to understand how IPMonitor actually measures/calculates bandwidth, because either my installation is getting flat-out wrong data via SNMP, or it is doing the calculation incorrectly. I'm concerned because one of the goals of the tool is to identify potential throughput bottlenecks in our network.
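One thing I'm wondering about (pure speculation on my part, not a confirmed cause): a 32-bit octet counter like ifInOctets wraps at 2^32 bytes, which at ~70 MB/sec is roughly every 61 seconds — almost exactly my 60-second polling interval. A poller that doesn't handle the wrap (or doesn't use the 64-bit ifHCInOctets counter) would produce bogus deltas. A sketch of wrap-aware handling, with invented counter values:

```python
# 32-bit SNMP counters (e.g. IF-MIB ifInOctets) wrap at 2**32 octets.
# At ~70 MB/sec that's roughly every 61 seconds, so a 60-second polling
# interval can easily straddle a wrap. Counter values are invented.

COUNTER32_MAX = 2**32

def delta_octets(octets_t0, octets_t1, counter_max=COUNTER32_MAX):
    """Counter delta between two samples, assuming at most one wrap."""
    if octets_t1 >= octets_t0:
        return octets_t1 - octets_t0
    # Counter wrapped between the samples: account for the rollover.
    return counter_max - octets_t0 + octets_t1

# A wrap occurring inside a 60-second interval of a ~70 MB/sec copy:
t0 = COUNTER32_MAX - 1_000_000                      # just below the wrap point
t1 = (t0 + 70 * 1_000_000 * 60) % COUNTER32_MAX     # wrapped during the interval

naive_mb = (t1 - t0) / 60 / 1_000_000               # nonsense (negative)
corrected_mb = delta_octets(t0, t1) / 60 / 1_000_000

print(f"naive: {naive_mb:.1f} MB/s, wrap-aware: {corrected_mb:.1f} MB/s")
```

If anyone knows whether IPMonitor uses the 64-bit high-capacity counters, or how it handles wraps, that alone might explain what I'm seeing.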