5 Replies Latest reply on Jul 6, 2018 6:36 PM by mesverrum

    How many bytes does it take to monitor a node?

    timt

      Hi,

      I asked support this question and haven't gotten a response yet... not to be confused with the question of how many bytes it takes to get to the center of SolarWinds

       

      My current issue is that I am running out of SQL disk space, and with the NTA upgrade using the same SQL server, I need to plan for additional resources and forecast growth for the next 5 years.


      Question: on average, how much disk space, in bytes or MB, does it take to monitor a single node? A ballpark number is fine, since I know it varies with alerts, polling intervals, etc. Maybe just a default number? 1 MB? 2 MB?

      Thank you in advance.

        • Re: How many bytes does it take to monitor a node?
          mesverrum

          It's honestly incredibly variable, depending on so many things that differ from environment to environment. What are your polling intervals and retention periods? Do you have SAM monitors on the node, and how many components are in them? How many interfaces/volumes are on the node? Do you use UNDP? Does the device support hardware health information, and how many sensors does it present? Are you collecting NetFlow from the device? Are you getting traps/syslogs? There are so many options that it's a real hassle to assess the impact of each individual node.

          The most practical approach for your specific scenario is to take the current size of your DB, divide it by the number of nodes in your environment, and assume each additional node will take at LEAST that much space in the future. If you had asked me in 2013 to project the size of my Orion instance today, I would have been so completely wrong it wouldn't even be funny; so many new modules and features have been released over that time, and each feature you use increases the disk requirements.
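The divide-the-database approach above is just arithmetic, but it's worth making explicit. A quick sketch (the 150 GB / 1,000-node figures are made-up placeholders for illustration, not SolarWinds guidance):

```python
# Back-of-envelope estimate of per-node disk cost, as described above.
# Substitute your own database size and node count; these are placeholders.

def per_node_mb(db_size_mb: float, node_count: int) -> float:
    """Average disk usage per monitored node, in MB."""
    return db_size_mb / node_count

def projected_growth_mb(db_size_mb: float, node_count: int, new_nodes: int) -> float:
    """Minimum extra space to budget for new_nodes additional nodes."""
    return per_node_mb(db_size_mb, node_count) * new_nodes

# Example: a 150 GB database monitoring 1,000 nodes
current_db_mb = 150 * 1024   # 153,600 MB
nodes = 1000
print(f"~{per_node_mb(current_db_mb, nodes):.0f} MB per node")
print(f"Adding 200 nodes needs at least {projected_growth_mb(current_db_mb, nodes, 200) / 1024:.0f} GB")
```

Treat the result as a floor, not a forecast; as noted above, every new module or feature you enable pushes the real per-node cost up.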

           

          Another factor to consider is how tight a ship you run. In almost every environment I visit there are elements being monitored that provide no value because nobody looks at them, or something like a loopback interface, a null interface, or a floppy drive that has been collecting data for years for no good reason. If you only track useful information and only retain it for useful amounts of time, I find that even a big instance rarely exceeds 200-300 GB. Where I find clients with DBs bigger than that, they always have a ton of clutter that we end up cleaning out of the system.

          The other case where DB sizes balloon is when people crank their retention periods up to the maximum values. In theory, years and years of retention in Orion might sound like a good idea, but in practice it isn't very useful: the longest you can retain detailed or hourly averages is just 6 months, everything older than that is daily averages, and if you actually need to do historical reporting, that level of granularity is usually too low to be helpful.

          For people who want a long, detailed history of performance metrics, I typically prefer to help them set up some kind of ETL process to periodically export the data to a separate cold-storage database. That way Orion stays efficient at displaying current data without the DB getting big enough to slow down the app, and they still have an archive of historical values to use for long-term analysis. Trying to do long-term storage inside Orion is a pain, because if you ever need to delete a node or other object, the system dumps all the historical data for that object; so people start doing things like leaving dead nodes unmanaged in the system for years after they decom the server. If you have the older data archived out somewhere, you can still pull it up as needed, even if someone does something dumb and accidentally erases an important WAN interface or similar.

            • Re: How many bytes does it take to monitor a node?
              mesverrum

              One point to add: those numbers do not take into consideration the new NTA database. It hasn't been out long enough for any clients I know of to get a feel for how much space it will use, and the documentation regarding database size requirements doesn't seem to have been updated yet, so I'm not sure whether to expect the same amount of disk usage or not. In 2-3 months I think there will be better information about what is "typical" for the new NTA.

               

              Overall I would say the sizing recommendations I'm seeing here are pretty much in line with what I see in the field:

              SolarWinds Orion Platform requirements

              • Re: How many bytes does it take to monitor a node?
                timt

                Thanks for the response, mesverrum, it was very helpful.

                 

                I think I will divide the DB size by the total number of nodes to get the average.

                 

                One more: you mentioned an ETL process to export/archive the DB. Can you tell me more about that?

                  • Re: How many bytes does it take to monitor a node?
                    mesverrum

                    I don't run into clients who need it often enough to have an SOP for it, but basically: figure out which metrics you need long-term history on, write a set of queries to gather that info from Orion, optimize the structure of your new DB for the kinds of reports you're likely to run, and then set up jobs to periodically pull that info from your live Orion instance and insert it into the cold storage. Pretty standard stuff for a DBA or anyone who does business analytics regularly. The only tricky part is that someone in the process needs a solid grasp of where everything you need lives inside the Orion DB, plus a good understanding of the reporting requirements you have now or are likely to have in the future, so you can make sure all the goods end up in the archive and are indexed in the ways that work best for your reports.
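The shape of that kind of job can be sketched in a few lines. This uses Python's stdlib sqlite3 as a stand-in for both databases purely for illustration; a real job would connect to the live Orion SQL Server and a separate cold-storage server, and the table and column names here are hypothetical placeholders, not the actual Orion schema:

```python
import sqlite3

# Stand-in databases: a real job would use ODBC connections to the live
# Orion SQL Server and a separate cold-storage server.
orion = sqlite3.connect(":memory:")
archive = sqlite3.connect(":memory:")

# Pretend this is the live Orion data we want long-term history on.
# (Hypothetical table/columns, not the real Orion schema.)
orion.execute("CREATE TABLE ResponseTime (NodeID INT, DateTime TEXT, AvgResponseTime REAL)")
orion.executemany(
    "INSERT INTO ResponseTime VALUES (?, ?, ?)",
    [(1, "2018-07-01", 12.5), (1, "2018-07-02", 14.0), (2, "2018-07-01", 9.8)],
)

# Cold-storage table, indexed for the reports you expect to run against it.
archive.execute("CREATE TABLE NodeResponseHistory (NodeID INT, DateTime TEXT, AvgResponseTime REAL)")
archive.execute("CREATE INDEX idx_node_date ON NodeResponseHistory (NodeID, DateTime)")

# The periodic job: pull the metrics you need and insert them into the archive.
rows = orion.execute(
    "SELECT NodeID, DateTime, AvgResponseTime FROM ResponseTime WHERE DateTime >= ?",
    ("2018-07-01",),
).fetchall()
archive.executemany("INSERT INTO NodeResponseHistory VALUES (?, ?, ?)", rows)
archive.commit()
```

In practice you'd schedule this (SQL Agent, cron, etc.) and make the `WHERE` clause incremental, pulling only rows newer than the last run so the job stays cheap against the live DB.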

                      • Re: How many bytes does it take to monitor a node?
                        mesverrum

                        The poor man's ETL process is just to schedule reports with the metrics you think you need and have those reports save to a network share in CSV or Excel format every day. The nice thing is that it's easy to implement with the built-in tools, most orgs have at least one Excel wizard on staff, and if you ever do decide to plunge into a full DB-based archival scheme, you already have the data and can import it into that new DB at any point in the future.
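The files that approach accumulates look something like this. Orion's scheduled reports produce them for you; this sketch just shows the shape of one day's drop, with the output path and metric names as made-up examples:

```python
import csv
from datetime import date

# Sketch of one day's "report to CSV" drop, the poor man's archive described
# above. The directory, filename pattern, and column names are hypothetical.

def export_daily_metrics(rows, out_dir="."):
    """Write one day's metrics to a dated CSV, e.g. metrics_2018-07-06.csv."""
    path = f"{out_dir}/metrics_{date.today().isoformat()}.csv"
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["NodeName", "AvgCPU", "AvgResponseMs"])
        writer.writerows(rows)
    return path

path = export_daily_metrics([("core-rtr-01", 37.2, 12.5), ("edge-sw-02", 12.9, 3.1)])
print(f"wrote {path}")
```

One dated file per day keeps each file small and makes a later bulk import into a real archive DB straightforward, since every file has the same header row.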