That's unusual behavior -- IF you have correctly configured the switch and the SNMP strings in NPM.
I suggest you review both configurations: edit the device in NPM, and also carefully review the switch's SNMP configuration for accuracy and typos. If you find NPM is correctly set up to monitor the switch via the right version of SNMP, using the correct community strings (or SNMPv3 auth and priv settings), then you may need to dig deeper.
- Do you have a firewall between NPM and the switch that is blocking SNMP traffic from reaching the switch, or blocking the replies from the switch back to NPM?
- Do you have ACLs on the switch that restrict which hosts may poll it with the SNMP string NPM is using?
- If you use SNMPv3, additional restrictions may be enforced by TACACS+ or other AAA servers; please ensure they are correctly configured or removed.
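For reference, on a Cisco IOS switch the community string and any ACL restriction live together in the SNMP configuration, so a typo in either breaks polling. A minimal sketch -- the ACL number, poller IP, and community string below are placeholders, not your actual values:

```
! Hypothetical: ACL 10 permits SNMP only from the NPM poller at 10.1.1.50
access-list 10 permit host 10.1.1.50
! Read-only community bound to ACL 10; SNMP from any other host is dropped
snmp-server community MyRoString RO 10
```

If an ACL like this exists but does not include the poller's current IP (for example, after a poller migration), the switch silently discards NPM's queries.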
When all else fails, and if the switch really is powered up and has a physical link and a logical path to the NPM poller, start simple.
- Verify Layers 1, 2, and 3 in that order. The poller must be able to ping the switch successfully.
- Verify the SNMP community strings. Start with a temporary/trial SNMPv1 or v2c string before trying to use v3.
- Once SNMP v1/v2c is working properly, implement SNMPv3 if you're familiar and comfortable with all of its options.
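The steps above map onto a quick check run from the poller's command line, assuming the net-snmp tools are installed. The IP address, community string, and v3 credentials here are placeholders for illustration:

```shell
# Layers 1-3: the poller must be able to ping the switch
ping -c 4 10.1.1.20

# Temporary SNMP v2c test with a trial read-only community string
snmpget -v2c -c TempRoString 10.1.1.20 sysDescr.0

# Only after v2c answers, test the v3 credentials (authPriv shown;
# the -a/-x protocols must match the switch's snmp-server group config)
snmpget -v3 -l authPriv -u npmuser -a SHA -A 'AuthPass123' \
        -x AES -X 'PrivPass123' 10.1.1.20 sysDescr.0
```

A successful `snmpget` returns the device's sysDescr string; a timeout at the v2c step while ping succeeds points at the SNMP configuration or an ACL/firewall rather than at basic reachability.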
Let us know the cause of this inaccurate reporting (since I don't think a hotfix is at fault).
To be honest, this was flagged by our network admin, and I'm pretty sure that in the past a down node would show errors under the hardware status: "unable to poll".
The device has been in SolarWinds since before the upgrade, and it is currently down. All ports, firewall rules, etc. were configured a long time ago and have been working fine between it and NPM.
Come to think of it, we have another Cisco device that is down, and its hardware status also shows as up.
The SNMP test through the edit resources page fails on both, so both devices are definitely down. These are old devices in the NPM system.