SolarWinds.Orion.LogMgmt.Syslog causing high memory on one poller while the other pollers are working fine

SolarWinds.Orion.LogMgmt.Syslog is causing high memory on one of our pollers while the other pollers are working fine. Can you please share the resolution? I have searched a lot but have not found anything yet.

  • Hello,

    We experienced this, and ultimately our resolution was to migrate the Log Analyzer database off the shared DB server where the Orion DB resided.  In our environment, we have 5 load-balanced web servers behind an F5 and 5 polling engines.  Over time, we would see the Orion.LogMgmt.Syslog service climb to 98-99% of all RAM (an impressive leak on a server with 64 GB).  Depending on which polling engine it was happening on, the web interface would be extremely slow or hang completely.  Killing the process and/or rebooting the server would clean it up; a monitoring sketch that automates that workaround is below this reply.

    It took over 60 days to get some resolution.  During the support case, the engineer let us know that the product could handle about 1,000 events per second.  For the volume of logs we were sending, moving to a separate database helped greatly.

    A downside has popped up for us, though.  This week we noticed a polling engine having the same issue.  Despite our following the process to move the DB by running the Configuration Wizard on all hosts, the primary polling engine somehow reverted to the old DB name and server.  That filled up the event log on the server, and we were again at 98% RAM.  We have disabled the service and are opening a new case to figure that one out.
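
While a case is open, a stopgap is to watch the service's memory and bounce it before it exhausts the box. Below is a minimal Python sketch of that idea, not vendor tooling: the process-name hint "LogMgmt.Syslog", the Windows service name "SWLogMgmtSyslog", and the 85% threshold are all assumptions you would need to verify on your own polling engines (e.g. with Task Manager and `sc query`).

```python
# Minimal sketch: watch the syslog service's working set on a polling engine
# and restart it before the leak reaches 98-99% of RAM.
# Assumptions (verify before use): process image name contains "LogMgmt.Syslog",
# Windows service name is "SWLogMgmtSyslog", and 85% is a sane restart threshold.
import subprocess
import time

import psutil

PROCESS_HINT = "LogMgmt.Syslog"      # substring of the process image name (assumed)
SERVICE_NAME = "SWLogMgmtSyslog"     # Windows service name (assumed) -- check with `sc query`
MEMORY_LIMIT_PCT = 85                # restart well before RAM is exhausted
CHECK_INTERVAL_SEC = 300

def find_service_process():
    """Return the psutil.Process whose image name matches the hint, or None."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and PROCESS_HINT.lower() in proc.info["name"].lower():
            return proc
    return None

def memory_pct(proc):
    """Working set of the process as a percentage of total physical RAM."""
    return proc.memory_info().rss / psutil.virtual_memory().total * 100

while True:
    proc = find_service_process()
    if proc is not None:
        pct = memory_pct(proc)
        print(f"{PROCESS_HINT}: {pct:.1f}% of RAM")
        if pct > MEMORY_LIMIT_PCT:
            # Same effect as killing the process by hand, just automated.
            subprocess.run(["net", "stop", SERVICE_NAME], check=False)
            subprocess.run(["net", "start", SERVICE_NAME], check=False)
    time.sleep(CHECK_INTERVAL_SEC)
```

This only masks the leak; the actual fix in our case was moving the Log Analyzer database off the shared DB server as described above.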
