I was at Loop1 training in Austin, where I enabled the 'Orion Server' template and shared the following data with my Orion instructor, who said I should open a ticket on it, as it could point to corruption in the worker processes/job engine and might require an uninstall/reinstall of those components specifically to fix. I wanted to see if anyone else has run into something similar, and how they resolved it, to get some parallel feedback.
I was thinking it could also be related to the performance of our SQL instance, but our instructor indicated it's likely local to the poller. Any insights or suggestions based on others' experience would be appreciated.
This is one of two NPM pollers that I have cross-monitoring each other with this template rather than self-monitoring, because I've got a separate ticket open on an issue where this polling engine stopped working while its services stayed up, so the self-monitored 'Polling Engine' alert that watches the Keep Alive never fired and no alert was sent.
The alert I set up for this 'Orion Server' template requires the condition to sustain for at least 30 minutes, so I know there are other abnormally high measures below, beyond the ones I'm specifically asking about, that reset before the 30 minutes are up and therefore never alert, even though they are also higher than recommended based on the component descriptions.

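To be clear on why those other spikes never alert: the trigger condition has to hold continuously for the full 30 minutes, so anything that resets sooner is suppressed. Here's a minimal Python sketch of that sustain logic as I understand it (just an illustration, not SolarWinds' actual alert engine; the sample format, timestamps, and threshold are assumptions):

```python
from datetime import datetime, timedelta, timezone

def sustained_breach(samples, threshold, window=timedelta(minutes=30), now=None):
    """samples: chronologically ordered (timestamp, value) pairs, UTC-aware."""
    now = now or datetime.now(timezone.utc)
    # Not enough history yet to prove a full 30 minutes of breach
    if not samples or now - samples[0][0] < window:
        return False
    recent = [value for ts, value in samples if now - ts <= window]
    # Fires only if every sample inside the trailing window is over the threshold,
    # so a metric that dips back under before 30 minutes never triggers.
    return bool(recent) and all(value > threshold for value in recent)
```
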
1.) Here's more on Jobs queued, which per the template description should always be zero, but you can see the averages since I started polling last week are consistently high.

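If anyone wants to sanity-check the same thing on their side, one way to summarize exported 'Jobs queued' samples (values copied out of a chart export) is something like the Python sketch below; the example values are placeholders, not my real data:

```python
import statistics

def summarize_jobs_queued(samples):
    """samples: polled 'Jobs queued' values; per the template this should stay at 0."""
    nonzero = [v for v in samples if v > 0]
    return {
        "average": statistics.mean(samples) if samples else 0.0,
        "max": max(samples, default=0),
        "pct_nonzero": 100.0 * len(nonzero) / len(samples) if samples else 0.0,
    }

# Placeholder values only, to show the shape of the output
print(summarize_jobs_queued([12, 8, 15, 0, 22, 17, 9]))
```
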
2.) Below is Job Engine v2 worker processes, where a value of 10 or lower is acceptable; this is also running high.

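For a quick local cross-check of the worker count on the poller itself (outside of Orion), a Python/psutil snippet like this works; the worker executable names are my assumption for Job Engine v2 and may differ by version, so verify them in Task Manager first:

```python
import psutil  # third-party: pip install psutil

# Assumed Job Engine v2 worker executable names; confirm on your own poller
WORKER_NAMES = {"swjobengineworker2.exe", "swjobengineworker2x64.exe"}

workers = [p for p in psutil.process_iter(["name"])
           if (p.info["name"] or "").lower() in WORKER_NAMES]

status = "OK" if len(workers) <= 10 else "above the recommended 10"
print(f"{len(workers)} Job Engine v2 worker processes running ({status})")
```
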
3.) Last but not least, the file count monitor JET FILES, which is also regularly high.

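And for the JET FILES component, a crude spot check is just counting *.jet files in whatever directory the file count monitor points at. The path below is a placeholder, so copy the real path and file mask from the component's settings on your poller; a count that climbs steadily instead of cycling back down would fit the queued-jobs symptom above:

```python
from pathlib import Path

# Placeholder path: use the directory configured in the JET FILES component monitor
JET_DIR = Path(r"C:\ProgramData\SolarWinds\JobEngine.v2")

jet_files = list(JET_DIR.rglob("*.jet"))
print(f"{len(jet_files)} .jet files found under {JET_DIR}")
```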