4 Replies Latest reply on Aug 7, 2013 12:04 PM by ET

    Job Engine v2 Jobs Queued in Performance Monitor

    mike1897

      [Attachment: JobEnginev2.JPG]

      In Performance Monitor, the Job Engine v2 "Jobs Queued" counter is showing over 2 billion jobs queued.  I am not seeing this on any of our other pollers in use.  Is this something I should be worried about, and if so, how would I fix it?

       

      We are using:   Orion Core 2012.1.1, NPM 10.3.1, UDT 2.5.1, IVIM 1.3.0, VNQM 4.0

        • Re: Job Engine v2 Jobs Queued in Performance Monitor
          solarwooky

          I have the same problem; my number sticks at 2,147,483,648. I'm going to open a case for this since SolarWinds has not responded.

          • Re: Job Engine v2 Jobs Queued in Performance Monitor
            holmjones

            We're seeing the same thing.  Restarting the Job Engine v2 service clears it out in about five minutes, but it happens again within hours or days.  There doesn't seem to be a specific event that triggers it; it jumps from 0 to roughly 2.1 billion in a single polling period.

             

            Some threads point to a version of the template missing a few "Count statistic as difference" checkboxes.  Ours is up to date, so that isn't the issue here.  The database is healthy, and neither it nor the polling engine is overloaded.

             

            I'm considering opening a case, too.  For the moment I've written a script to restart the service whenever this happens, but it's happening more frequently as time goes by.

             


            • Re: Job Engine v2 Jobs Queued in Performance Monitor
              ET

              Hi,

              this weird value actually doesn't indicate a problem with your system. We found that there is a possible synchronization issue at JobEngine service start-up between the initial counter reset and the first job increments on that counter. If the reset arrives late, the first increment can be lost, but the counter is still decremented when the job is dequeued. Under those specific conditions the sequence becomes: increment counter, reset counter, decrement counter. Because the counter can't hold a negative value, it wraps around, and the result is this weird huge value, e.g. -1 = 2,147,483,647. So in your attached screenshot there were 2 jobs queued before the counter reset occurred.
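
              The race ET describes can be sketched as follows. This is a minimal illustration (my own code, not anything from SolarWinds) that treats the counter as an unsigned 32-bit value, as Performance Monitor counters are; the exact huge value you see depends on how the monitor interprets the raw counter, but the wraparound mechanism is the same:

              ```python
              # Simulate the start-up race: increment, late reset, decrement.
              # The counter is held in the unsigned 32-bit range, so a decrement
              # below zero wraps around to a huge positive number.

              MASK = 0xFFFFFFFF  # unsigned 32-bit wraparound mask

              def u32(value):
                  """Wrap a Python int into the unsigned 32-bit range."""
                  return value & MASK

              counter = 0
              counter = u32(counter + 1)   # a job is enqueued: counter -> 1
              counter = 0                  # delayed start-up reset fires: counter -> 0
              counter = u32(counter - 1)   # the job is dequeued: 0 - 1 wraps around

              print(counter)  # a value in the billions, not a real queue depth
              ```

              The counter never "recovers" on its own because it is only ever incremented and decremented from that wrapped baseline, which matches the observation that only a service restart clears it.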

               

              A workaround to determine the real number of queued jobs is to combine the values of two performance counters:

              queue items = "SolarWinds: JobScheduler v2\Jobs Running" - "SolarWinds: JobEngine v2\Jobs Running"
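
              As a rough sketch of that workaround (the function names here are mine, hypothetical, and not any SolarWinds API), you could combine the two counter samples and map a wrapped reading back to its small negative origin for display or alerting:

              ```python
              # Hypothetical helpers illustrating ET's workaround arithmetic.

              def real_jobs_queued(scheduler_jobs_running, engine_jobs_running):
                  """ET's formula: JobScheduler 'Jobs Running' minus JobEngine 'Jobs Running'."""
                  return scheduler_jobs_running - engine_jobs_running

              def unwrap_counter(raw, bits=32):
                  """A raw reading in the top half of the unsigned range is really a
                  small negative number that wrapped; map it back."""
                  if raw >= 2 ** (bits - 1):
                      return raw - 2 ** bits
                  return raw

              print(real_jobs_queued(7, 5))      # 2 jobs actually waiting
              print(unwrap_counter(4294967294))  # -2: the counter underflowed by two
              ```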

               

              Thanks

              ET