0 Replies Latest reply on Dec 3, 2012 3:31 PM by jorekk

    Trying to get an idea of how much data is passing through interfaces


      So I have been tasked with putting a report together for an executive so we can charge for bandwidth utilization from folks who lease some extra building space/network ports.  In simpler terms, they want to charge a customer for moving X amount of data through our network/devices.  The hardware is a 28-port Cisco 3750 with one port designated as the "uplink"; all the others go to whatever equipment each customer connects.


      I have the switch monitored in NPM along with all of its interfaces (each interface represents a different customer).  I have also created a report in Report Writer that sums Total Bytes Received on those interfaces (I did Transmitted and Recv+Xmit as well, for giggles).  In theory, the bytes received on the customer interfaces (i.e. data sent by whatever equipment the customers have on the other end) should add up to the bytes transmitted on the uplink port of that switch toward the rest of the network, correct?
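To make the expectation concrete, here is the sanity check I assumed would hold, with made-up interface names and byte totals (not real numbers from my report):

```python
# Hypothetical polled totals for one reporting period.
# Expectation: sum of bytes received on customer ports
# ~= bytes transmitted on the uplink port.
customer_rx_bytes = {
    "Gi1/0/2": 1_200_000_000,  # customer A
    "Gi1/0/3": 3_400_000_000,  # customer B
    "Gi1/0/4": 800_000_000,    # customer C
}
uplink_tx_bytes = 5_400_000_000

total_customer_rx = sum(customer_rx_bytes.values())
print(total_customer_rx)   # 5400000000 -- matches the uplink here
print(uplink_tx_bytes)
```

In my actual reports, though, the two sides of that comparison are nowhere near equal.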


      The problem is that it isn't the case -- the numbers are off by terabytes.  As you can imagine, this leads to a loss of confidence in the reporting/monitoring, and I'm at a loss to explain why.  With each interface representing a different customer, the argument is that no data should be passing between switch ports; everything should be on its way out the respective gateway to wherever the customer sent it.


      So this leads me to my question: how is NPM getting the Bytes Received/Transmitted figures in its polling?  Which MIB in particular does it query, and how is the value calculated?
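For what it's worth, my working assumption is that NPM polls the IF-MIB octet counters (ifHCInOctets/ifHCOutOctets on 64-bit-capable interfaces) and sums the deltas between polls. A minimal sketch of that delta math, with made-up readings (this is my guess at the general technique, not NPM internals):

```python
# ifHCInOctets is a 64-bit monotonically increasing counter that wraps
# to zero at 2**64.  A poller reads it on an interval and accumulates
# the per-interval deltas, allowing for at most one wrap per interval.
COUNTER64_MAX = 2**64

def bytes_between(prev, curr, max_val=COUNTER64_MAX):
    """Bytes counted between two successive counter readings."""
    if curr >= prev:
        return curr - prev
    return (max_val - prev) + curr  # counter wrapped between polls

# Two illustrative polls, plus a pair that straddles a wrap:
print(bytes_between(1_000, 6_000))             # 5000 bytes this interval
print(bytes_between(COUNTER64_MAX - 10, 40))   # 50 bytes across the wrap
```

If that's roughly right, I'd also like to know whether missed polls or counter wraps could account for a discrepancy this large.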


      Any help/insight is greatly appreciated.