
    Sum multiple interfaces.

    AndrewShine

      I have some ESX servers that each have dual network links to an EqualLogic iSCSI SAN. I can easily get Orion to collect and monitor each link individually. However, I'd like graphs etc. for the summed throughput rate / volume / errors (basically, click on a link in a map and see the normal interface page, but with the numbers and graphs representing the sum of the underlying two links).
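
      To be concrete about the aggregation I want: it's just the sum of the two links' counters at each poll, graphed over time. Purely as an illustration (not an Orion feature), a minimal sketch of that sum done outside Orion might look like the following, assuming SNMP v2c access to the host and the pysnmp library; the hostname, community string, and ifIndex values are placeholders:

      from pysnmp.hlapi import (
          getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
          ContextData, ObjectType, ObjectIdentity,
      )

      HOST = 'esx-host.example.com'   # placeholder hostname
      COMMUNITY = 'public'            # placeholder community string
      IF_INDEXES = (2, 3)             # placeholder ifIndex values for the two SAN links

      # IF-MIB 64-bit octet counters
      IF_HC_IN_OCTETS = '1.3.6.1.2.1.31.1.1.1.6'
      IF_HC_OUT_OCTETS = '1.3.6.1.2.1.31.1.1.1.10'

      def poll_counter(base_oid, if_index):
          """Fetch one octet-counter value for one interface via SNMP GET."""
          error_indication, error_status, _, var_binds = next(
              getCmd(SnmpEngine(),
                     CommunityData(COMMUNITY),
                     UdpTransportTarget((HOST, 161)),
                     ContextData(),
                     ObjectType(ObjectIdentity('%s.%d' % (base_oid, if_index)))))
          if error_indication or error_status:
              raise RuntimeError(str(error_indication or error_status))
          return int(var_binds[0][1])

      # Sum the raw octet counters across both links; throughput would be the
      # delta between two such polls divided by the polling interval.
      total_in = sum(poll_counter(IF_HC_IN_OCTETS, i) for i in IF_INDEXES)
      total_out = sum(poll_counter(IF_HC_OUT_OCTETS, i) for i in IF_INDEXES)
      print('summed in octets:', total_in, 'summed out octets:', total_out)

      That summed number is what I'd like Orion to graph natively for the pair of links.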

      Note: I've seen a couple of posts about doing this as a report using a custom property, but that's not what I'm looking for.

      Any suggestions?

      Thanks

      Andrew

        • Re: Sum multiple interfaces.
          scottshipley

          I have also requested this, but to sum multiple interfaces across multiple devices to create a "logical" object representing different types of traffic, or the sum of multiple UDP collectors such as "total VPN connections" across multiple devices.

          • Re: Sum multiple interfaces.
            ecklerwr1

            It depends on what kind of NICs/drivers you have. I have a situation like that, but I monitor both through a virtual adapter called "BASP Virtual Adapter" instead of each individual interface. As far as SolarWinds' way to group them... I don't think there's an easy way. If someone has a great solution to this, I too would love to hear it.

              • Re: Sum multiple interfaces.
                AndrewShine

                I do have Broadcom NICs; however, I think the BASP Virtual Adapter only applies to Windows platforms. I'm playing with ESX :-(

                Looks like this is a feature request for the next version of NPM...

                  • Re: Sum multiple interfaces.
                    ecklerwr1

                    I wonder if, in the following kind of scenario, you could then monitor the "bond0" interface:

                    4.1.4 Configuration 4:

                    In this configuration, NIC 0 is shared between the service console and the VMkernel. NIC 1 is dedicated to the VMkernel. NIC 0 and NIC 1 are teamed by the VMkernel to provide a bond. This bonding provides for NIC redundancy and load balancing.

                    The steps to enable this configuration are:

                    4. Log in with root-level privileges to the service console and execute the command vmkpcidivy -i. This is an interactive command to allocate devices between the service console and virtual machines. Configure NIC 0 to be shared between the service console and virtual machines.

                    5. Create a NIC team by adding the following lines to the end of the file /etc/vmware/hwconfig:

                    nicteam.vmnic0.team = "bond0"
                    nicteam.vmnic1.team = "bond0"
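
                    If the team is created that way, bond0 should show up in the host's SNMP ifTable as a single row, which is what Orion would need in order to poll it like any other interface. Just as a hedged sketch to check for it from a monitoring box (assuming SNMP is enabled on the service console and using the pysnmp library; the host and community string are placeholders):

                    from pysnmp.hlapi import (
                        nextCmd, SnmpEngine, CommunityData, UdpTransportTarget,
                        ContextData, ObjectType, ObjectIdentity,
                    )

                    HOST = 'esx-host.example.com'   # placeholder service-console address
                    COMMUNITY = 'public'            # placeholder community string

                    # Walk IF-MIB::ifDescr and print any interface whose description
                    # mentions "bond", to confirm the team is exposed as one interface.
                    for (err_ind, err_stat, _, var_binds) in nextCmd(
                            SnmpEngine(),
                            CommunityData(COMMUNITY),
                            UdpTransportTarget((HOST, 161)),
                            ContextData(),
                            ObjectType(ObjectIdentity('1.3.6.1.2.1.2.2.1.2')),  # ifDescr
                            lexicographicMode=False):
                        if err_ind or err_stat:
                            print(err_ind or err_stat)
                            break
                        for oid, value in var_binds:
                            if 'bond' in str(value).lower():
                                # the OID ends in the ifIndex of the matching interface
                                print(oid.prettyPrint(), '=', value.prettyPrint())

                    If that lists a bond0 row, you could add just that one interface in Orion instead of the two physical NICs.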