7 Replies Latest reply on Jan 5, 2012 7:57 AM by Garrett Gross

    How to tweak STM Agents

    Deltona

      Hi guys,

      There's a tweak to increase the memory cap on an agent, but I can't find any further details on what the change actually involves.
      I'm trying to figure out what the increase in memory cap does specifically. Does it simply raise the bar on how much memory space each handle gets in general, or is it relevant to a specific module running on the agent, like, say, the collector only?

      Furthermore, I'd greatly appreciate it if you could tell me what else can be done to an agent to let it flex its muscles without constraints.

      SolarWinds.Storage.Agent.ini

      EXT_ARGS=-XX:MaxPermSize=256M -Xrs -Xms67108864 -Xmx536870912 -Djsnmp.ignoreV1V2PduSizeLimit=true

      All of those values: what do they represent exactly? Is it safe to change them? What are the limits?

       

      Thanks in advance.

        • Re: How to tweak STM Agents
          Garrett Gross

          Deltona - 

          The "cap" is actually the data archive limit.  This is set so that, in the event that the agent is not being collected from in a timely fashion, it will collect data on its own but stop at a certain point.  That prevents the agent from filling up a volume with collected data.  Generally, this value is only adjusted when collection consistently takes longer than normal and no other workarounds help.

          On the agent memory values:

          These are all JVM memory settings; they control how much memory the Java process uses.

          "-XX:MaxPermSize" sets the permanent generation space value.  We usually leave this value at default except when asked by development to modify it.  More info on PermGen Space can be found here: http://blogs.oracle.com/jonthecollector/entry/presenting_the_permanent_generation

          The "Xms" value is the initial heap size, or the amount of memory the process starts out with. The -Xms setting pre-emptively allocates memory when the JVM is started. When no -Xms value is set, the JVM simply uses the default and dynamically grows the heap up to the limit set with the -Xmx value. Therefore, unless you have an explicit need to pre-allocate heap memory, the -Xms value doesn't really gain you much, since the JVM will eventually grow the heap to the -Xmx size as needed. Neither the agent nor the server really has an explicit need for "up front" heap allocation, so the -Xms setting is kind of a wash.

          The "Xmx" value is the maximum java heap size and is the one we are most concerned with.  Changing this value adjusts the maximum amount of memory that a specific process will utilize.  The only caveat to issue is that, in a 32 bit installation, the maximum heap size that can be set is around 1024mb (actually 1200mb, I think).  Allocating more will prevent the process from starting.

          With a 64 bit install, though, you are able to allocate as much as you have available on the box.

          You'll notice, though, that the default Xmx and Xms values are in bytes.  You can add a lowercase "m" at the end to switch to megabyte notation (e.g. 1024m, 2048m, etc.).
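
          For example, the default line above converted to megabyte notation (67108864 bytes is 64m and 536870912 bytes is 512m; adjust the Xmx figure to whatever your box can spare) would look something like this:

          EXT_ARGS=-XX:MaxPermSize=256M -Xrs -Xms64m -Xmx512m -Djsnmp.ignoreV1V2PduSizeLimit=true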

          Let me know if that helps!

            • Re: How to tweak STM Agents
              Deltona

              Hi Garrett,

              Thanks for the link; it's a good read and really makes you appreciate the work put into the agents. Reading through the document, I can't help but ask myself what is being stored in the permanent generation and what data, if any, is stored to disk. Also, how is data handled when the perm generation space has reached its limit? Does it migrate or duplicate over into the heap?

              Say you have a 64-bit system with an agent installed on it. The agent MaxPermSize is set to 1024M and the heap size is set to double that, which is 2147483648 (2048M).
              If the perm generation space needs more room for classes (why would it?), does it move them over to the heap, or does it have any memory-to-disk capability that would store the data indefinitely in a file on disk? Does it do this synchronously, or would it only copy to disk when the heap is full?
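
              (In the ini, keeping the other defaults, that hypothetical setup would look something like this:)

              EXT_ARGS=-XX:MaxPermSize=1024M -Xrs -Xms67108864 -Xmx2147483648 -Djsnmp.ignoreV1V2PduSizeLimit=true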

              I ask this because I'd like to know whether data at some point gets flushed and can't be reused, resulting in inconsistent and/or fragmented data being returned to the STM server.

              I've had an issue with an agent that was configured to do file analysis on a server with close to a thousand directories, where the file counts reached tens of thousands. The agent would never complete the analysis, and when it did (by rerunning the FA job with fewer directories to scan), the data returned would be inconsistent and would overlap existing analytics.

              I won't pretend to know what is going on in the agent at that point, but I could imagine that it has run out of memory space. All the files and directories being 'analysed' are being purged at some point, because the agent is only running in memory space and has no means of generating temp data onto disk.

                • Re: How to tweak STM Agents
                  Garrett Gross

                  Deltona - 

                  I am not sure exactly how the memory would be handled in the example you gave, but I'm not sure your issue is related to the JVM memory settings alone.

                  None of the data collected in the file analysis scan, though, is held in JVM memory for an extended period of time.  The agent scans the files and collects the data, the collector service collects it from the agent and writes it to the database to be retrieved later, and the data is then wiped from the agent module's data directory.

                  If file analysis is not completing in the time frame you have specified (day, week), upping the java heap size alone will normally not help.  It may make it easier for the agent to perform other tasks while running file analysis but not by much.  You will need to adjust the file analysis cache settings as well.

                  You can adjust the file analysis cache settings in the agent configuration (wrench) menu by clicking "advanced fields" and scrolling down to the "file analysis" section.  More info here: http://knowledgebase.solarwinds.com/kb/questions/2373/How+do+I+increase+the+amount+of+cache+a+scan+can+use%3F

                  The other option (as you mentioned earlier) is to reduce the number of shares the agent is scanning.

                  If you are still getting inconsistent and/or incomplete scans after adjusting the heap/cache values and reducing the number of shares scanned, please contact our support team and we'll be happy to investigate further.