It should pick it up automatically, but have you checked that the grayed-out node being monitored has 'Poll for VMware' ticked and is using ESX credentials? It might have become confused because the node was already being polled through, say, WMI, and was never correctly tied to the fact that it's part of the vCenter.
Stranger things have happened with our SolarWinds instance.
There are guests on 'x' host that are being managed through WMI. All of the VM hosts are being polled through vCenter.
This is only happening for servers that were already in SolarWinds before I set up the Virtualization Manager tree to poll through vCenter.
The weird part is that some servers that were already managed by SolarWinds did find their spot in the tree. It's just certain nodes.
In a few cases I have seen the synchronization between the Orion.Nodes table and Orion.VIM.VirtualMachines get screwed up and cause effects like this. The obvious clue is usually that the NodeID column is NULL for the VM, so the IVIM side of Orion decides this VM has no related node in the database and prompts you to add it. But when you click to add it, the Add Nodes interface works strictly from the Orion.Nodes table, where it does find a node with the matching IP address already in the list. It would be useful if there were a way to manually map a NodeID to a VM in cases where you noticed a problem.
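The mismatch I'm describing can be sketched out like this (Python, with made-up sample rows; the table and column names follow my description above, this is not an actual database query):

```python
# Hypothetical illustration of the sync problem: a row in
# Orion.VIM.VirtualMachines has a NULL NodeID, yet a node with the
# same IP address already exists in Orion.Nodes. All data is invented.

def find_orphaned_vms(vm_rows, node_rows):
    """Return (vm, node) pairs where the VM has no NodeID but a node
    with the same IP address already exists in the nodes table."""
    nodes_by_ip = {n["IPAddress"]: n for n in node_rows}
    orphans = []
    for vm in vm_rows:
        if vm["NodeID"] is None and vm["IPAddress"] in nodes_by_ip:
            orphans.append((vm, nodes_by_ip[vm["IPAddress"]]))
    return orphans

# Made-up rows mimicking the two tables getting out of sync.
nodes = [{"NodeID": 101, "Caption": "app01", "IPAddress": "10.0.0.5"}]
vms = [{"VirtualMachineID": 7, "NodeID": None, "IPAddress": "10.0.0.5"}]

for vm, node in find_orphaned_vms(vms, nodes):
    print(f"VM {vm['VirtualMachineID']} has no NodeID, but node "
          f"{node['NodeID']} ({node['Caption']}) already owns IP "
          f"{vm['IPAddress']}")
```

That pairing is exactly what the Add Nodes interface trips over: the VIM side thinks the VM is new, while the nodes table already claims the IP.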
In the past, when I ran into this, support had me break the VMAN integration, get everything cleared from the VIM tables (vCenters, hosts, VMs, everything), then add the vCenter back in and wait for it to populate all the child hosts and VMs from there. I think things get especially messy in cases where you polled the ESX hosts directly and later switched the polling to go through the vCenter.
I was able to resolve this with the assistance of support.
You have to delete the node.
Then go into the Virtualization tab.
Expand the tree and find the node. I left the 'Poll for VMware' box checked; not sure if it mattered.
The node was re-added that way and showed up immediately.
The issue was on the SolarWinds side: essentially, the node was being written to two different database tables. It sounded like support knew about the bug and was going to address it in a new release.