1 Reply Latest reply on Sep 13, 2017 10:58 AM by xtraspecialj

    F5 Viprion Chassis and Blade Monitoring

    xtraspecialj

      My knowledge of F5s is practically nil, so I'm hoping people smarter than I am about this subject can help me out here.

       

      We have an F5 Viprion chassis to monitor with 3 blades in it.  Those 3 blades have 5 virtual devices spread across them.  What we're trying to do is find a way to properly monitor the chassis itself.  We were given a spreadsheet that has the out-of-band management IPs for each physical blade and a "master" IP for the blade "cluster".

       

      According to the only post on Thwack I could find about Viprions (here), each blade in a Viprion has a unique IP address, and the cluster has a management IP address that points to whichever blade is currently designated as the cluster manager.  So, treating it like a typical cluster to be monitored in Orion, we added the cluster manager IP and chose to monitor everything found in List Resources; then, for each blade's IP, we only chose to monitor Hardware Health, F5, CPU & Memory, all of the volumes, and just the interfaces with mgmt in their name.

       

      The problem we have is that each of the blades' IPs comes up with the exact same name (i.e., Caption) and the same physical MAC address under the Node Details section.  Plus, only the cluster manager node and the blade 1 node have working hardware health; the rest of them come back as Hardware Health Unknown.  Furthermore, a lot of the volumes have the same name and the exact same percent used across all of the blades, with the exception of a couple of them.  Basically, I'm just really confused and probably not understanding how these devices work.
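      For what it's worth, one way to sanity-check the symptom outside of Orion is to collect what each blade IP reports and group the records to see which management IPs share an identity.  A minimal Python sketch; the IPs, caption, and MAC below are hypothetical stand-ins for the duplicated values we're seeing:

```python
from collections import defaultdict

def find_duplicate_identities(nodes):
    """Group node records by (caption, MAC address) and return only the
    groups containing more than one management IP -- i.e., blades that
    are answering with the same cluster-wide identity."""
    groups = defaultdict(list)
    for node in nodes:
        groups[(node["caption"], node["mac"])].append(node["ip"])
    return {key: ips for key, ips in groups.items() if len(ips) > 1}

# Hypothetical poll results matching the symptom described above:
blades = [
    {"ip": "10.0.0.11", "caption": "viprion-cluster", "mac": "00:11:22:33:44:55"},
    {"ip": "10.0.0.12", "caption": "viprion-cluster", "mac": "00:11:22:33:44:55"},
    {"ip": "10.0.0.13", "caption": "viprion-cluster", "mac": "00:11:22:33:44:55"},
]
print(find_duplicate_identities(blades))
```

      If all three blade IPs land in one group like this, it at least confirms the duplication is coming from the device side (the blades answering with cluster-level values) rather than from how Orion is caching the nodes.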

       

      So my questions are:

       

      - Why are the blades all coming back with the same MAC address, the same system name, mostly the same volumes, and mostly the same interfaces?

       

      - Why are hardware health sensors showing as available on all of the blades in List Resources, but only working on the cluster manager IP and blade 1 (which I assume is currently designated as the manager), while showing as Unknown on blades 2 and 3?

       

      - Is there any way that Orion natively associates the blades with the chassis they live in, or do we have to do this manually with custom properties or something?  Even cooler would be if it could tell us which blade a particular virtual device is living on, the way vSphere monitoring ties the VMs to the hosts.
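
       

      If the association does end up being manual custom properties, it can at least be scripted rather than clicked through.  A rough sketch using the SolarWinds Orion SDK's Python client (orionsdk); the server name, credentials, blade IPs, and the "Chassis" custom property name are all assumptions, and the custom property would need to be created in Orion first (Settings > Manage Custom Properties):

```python
def chassis_assignments(chassis_name, blade_ips):
    """Pure helper: map each blade management IP to its chassis name."""
    return {ip: chassis_name for ip in blade_ips}

def tag_blades(swis, assignments):
    """Look up each node by IP via SWQL and write the Chassis custom
    property on it.  `swis` is an orionsdk SwisClient instance."""
    for ip, chassis in assignments.items():
        results = swis.query(
            "SELECT Uri FROM Orion.Nodes WHERE IPAddress = @ip", ip=ip
        )["results"]
        for row in results:
            swis.update(row["Uri"] + "/CustomProperties", Chassis=chassis)

if __name__ == "__main__":
    # orionsdk import kept here so the helpers above run without the package.
    from orionsdk import SwisClient

    # Hypothetical Orion server, credentials, and blade management IPs:
    swis = SwisClient("orion.example.com", "admin", "password")
    tag_blades(swis, chassis_assignments(
        "Viprion-Chassis-1", ["10.0.0.11", "10.0.0.12", "10.0.0.13"]))
```

      With the property populated, the blades can then be grouped by Chassis in dynamic groups or reports, which is a rough stand-in for the native chassis-to-blade mapping being asked about.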