5 Replies Latest reply on Jun 11, 2018 12:35 PM by mthughes

    Unable to poll ESXi Host directly in VMan 8.1


      Have a problem where polling an ESXi host through vCenter returns different hardware health results from polling it directly.

      I was able to confirm this with an old version of SolarWinds that we are migrating away from.

      Looking at the workaround on our new, not yet live system shows that this workaround is not possible.

      Similar to this: Troubleshooting Guide for Hardware Health for ESX host Server Polled Through vCenter - SolarWinds Worldwide, LLC. Help a…

      Which leads you to this: https://support.solarwinds.com/Success_Center/Network_Performance_Monitor_(NPM)/Change_a_host_to_poll_through_vCenter_vs_Poll_ESX_server_directly


      It appears that in this latest version it is possible to change how a vCenter itself is polled (Basic, VMan Appliance, or VMan Orion), but it is not possible to select an ESXi host and change how that is polled, i.e. via vCenter or directly. The option is greyed out at that point.


      Does anyone know if this functionality has been removed, or is this a bug or an issue with the install?


      Thank you

        • Re: Unable to poll ESXi Host directly in VMan 8.1

          I definitely have the same issue.  I paid it no attention until one of our groups pointed out that their vCenter was going to report connection errors, because they use it to connect to on-site hosts that are flung far afield.  For management they typically go to the ESXi host directly.


          For the hosts in vCenter that occasionally lose their connection to vCenter due to latency, I should be able to switch to polling them directly with root credentials. To my surprise the option appears to be gone, although I note that the Virtualization Polling Settings page still lists "polling through" as a column value, despite the GUI not having a way to change it.

          • Re: Unable to poll ESXi Host directly in VMan 8.1



            For your first issue of seeing different Hardware Health values when polling via vCenter versus polling ESXi directly, I would reset the sensors on the ESXi host first to see if that fixes it. Most of the time, when there is a difference in SolarWinds, it is because those are the values on the host or the vCenter server. You can verify hardware health on the ESXi host most accurately by using PowerCLI or the HTTP MOB interface when connecting to the ESXi host directly.


            This link shows an example of using PowerCLI to poll the non-green hardware sensors and to reset them. It is easier to reset the sensors using the vSphere Client or vSphere Web Client, though, especially if you are not already using PowerCLI for VMware management.

            VMware Knowledge Base


            You can also use the HTTP(S) Managed Object Browser (MOB) interface on the ESXi host to see the current hardware health, as shown in your first URL link.


            The navigation path to the hardware sensors is:

            content -> rootFolder -> childEntity -> hostFolder -> childEntity -> host [select appropriate host] -> runtime -> healthSystemRuntime -> systemHealthInfo -> numericSensorInfo
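
            As a concrete sketch, that same path can be reached directly in a browser against the host's MOB. The `ha-host` managed object ID is the usual default for a standalone ESXi host, but the ID (and whether the MOB is enabled at all) can differ in your environment:

            ```
            https://<esxi-host>/mob/?moid=ha-host&doPath=runtime.healthSystemRuntime.systemHealthInfo
            ```

            On recent ESXi builds the MOB is disabled by default and has to be enabled via the Config.HostAgent.plugins.solo.enableMob advanced setting first.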


            See this Veeam post for more information on the MOB and on fixing hardware health differences: KB1007: Hardware status differs in VMware vCenter server and Veeam monitoring products

            I have also fixed hardware health issues by stopping and starting the sfcbd-watchdog service (the Small Footprint CIM Broker watchdog). You can do this from the vSphere Client or the ESXi host's CLI.
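
            For the CLI route, a minimal sketch, assuming SSH or the ESXi Shell is enabled on the host (these init scripts exist on ESXi 5.x/6.x and run only on the host itself, not on a regular Linux box):

            ```
            # Run on the ESXi host (SSH or ESXi Shell)
            /etc/init.d/sfcbd-watchdog status    # check the CIM broker watchdog
            /etc/init.d/sfcbd-watchdog restart   # stop and start it; sensors repopulate after a few minutes
            ```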



            For the second issue, changing the polling method, the steps are not always clear in the latest products. I used this post to help me solve the issue:

            What's better? Polling ESXi directly or through vCenter


            The important points are:

            • There are issues in NPM when you poll through vCenter and via SNMP at the same time
            • You need to make the change on the ESXi node to get it to appear under the vCenter server
            • You may need to unmanage the vCenter server that is managing the ESXi host to fix the issue


            Steps to change polling from ESXi directly to vCenter polling

            1. On the ESXi host, change the Polling Method to Status Only: ICMP
            2. Unselect the Poll for VMware checkbox
            3. Save the settings and wait half an hour, or try to force the change with a Rediscover followed by a Poll Now on the vCenter Server node (the Poll Now is probably redundant, since the Rediscover should poll afterwards)
            4. The host should appear under the vCenter Server, and clicking on the ESXi node should take you to the ESXi host summary page. If it does not, unmanaging the vCenter Server for an hour should help


            Once it is correctly being polled through the vCenter server, if you click Edit Node on the ESXi node you will see that the Poll for VMware checkbox is now checked and it states "Polling for this ESX Host is handled by vCenter using existing credentials."

            You will also see the ESXi host on the Virtualization Polling Settings / VMware page when you expand the vCenter Server that manages it, though it will show the Polling Method as Basic and you will not be able to change it there. If you change the Polling Method of the ESXi node to SNMP, the page will then let you switch between the two polling methods.

            As stated before, you should not be polling with both SNMP and the VMware API (via the ESXi host directly or via the vCenter server), at least as of NPM 12.2, which is when I first saw this warning on the Edit Node settings page. They poll the same information, and there are issues with how SolarWinds handles the duplicate data.

            • Re: Unable to poll ESXi Host directly in VMan 8.1



              I ran into the same issue this past week. I am polling the ESXi host through the vCenter server. I had a RAID drive go down, and after it was rebuilt SolarWinds NPM still shows the drive as being in a critical state.


              The hardware status shows as green when viewed directly on the ESXi host with the vSphere Client, and when connecting to the vCenter server with both the vSphere Client and vSphere Web Client. SolarWinds NPM still shows the drive in a critical condition.

              Items tried:

              • Reset the System Event Log (SEL)
              • Reset the Sensors / Updated Sensors with vSphere Client and vSphere Web Client
              • Restarted the CIM broker on the ESXi Host via the sfcbd-watchdog


              None of these fixed the issue.


              I switched the polling in SolarWinds from vCenter to polling the ESXi host directly (still using the VMware API) and the error went away. When I switched back to polling the ESXi host details through the vCenter server, the drive error came back. There appears to be a cached error in the vCenter server that only shows up when polling hardware health via the vCenter VMware API; the vCenter server still shows everything as good when you connect to it with the vSphere clients.


              I experienced the same issues shown in this link from VMware (it is rare to actually match the experience of a VMware KBA):

              VMware Knowledge Base


              Overall hardware health showed no errors, like their example, but when I polled the specific storage hardware sensor health I saw the error:


              PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> foreach($esx in Get-VMHost -Name FQDN_OF_HOST){
              >>     $hs = Get-View -Id $esx.ExtensionData.ConfigManager.HealthStatusSystem
              >>     $hs.Runtime.HardwareStatusInfo.StorageStatusInfo |
              >>     where{$_.Status.Label -ne 'Green'} |
              >>     Select @{N='Host';E={$esx.Name}},Name,@{N='Health';E={$_.Status.Label}}
              >> }
              Host              Name                                                       Health
              ----              ----                                                       ------
              FQDN_OF_HOST      Drive 0_5 on controller 5003048010B3C200 Fw: 4B0Q - FAILED Red


              Only refreshing the hardware health status using PowerCLI fixed the issue.

              (Get-View (Get-VMHost -Name esxhostname | Get-View).ConfigManager.HealthStatusSystem).RefreshHealthStatusSystem()


              I unfortunately do not know how to refresh the hardware health cache using the vSphere Client - perhaps rebooting the vCenter server would clear up the issue.
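
              If the environment is a vCenter Server Appliance, restarting just the vpxd service is a lighter-weight thing to try before a full reboot. Whether this actually clears the hardware health cache is an assumption on my part; on a Windows vCenter you would restart the VMware VirtualCenter Server service instead:

              ```
              # Run on the VCSA appliance shell
              service-control --stop vmware-vpxd
              service-control --start vmware-vpxd
              ```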


              The issue is also fixed if you switch to polling the ESXi host directly, but hopefully this will save anyone from having to make that change on dozens of hosts.