1 Reply Latest reply on Sep 1, 2010 2:25 PM by Jan.Krivanek

    Netflow Storage Calculation


      I have been running NTA now for 23 days and have been collecting on only 5 interfaces. My current Netflow database size is reported to be 4897.375MB.

      Now, if my math is correct, that works out to roughly 42.6MB per interface per day of data. The problem is we are looking to up our interface count in NTA to 200 and retain that data for 90 days.

      So by that same math, my Netflow database would grow by about 8.3GB a day for 200 interfaces. Multiply 8.3GB a day by 90 days and I am looking at a storage requirement of roughly 747GB to hold that data.

      Someone correct me if my math is off, but is there some way to compress this storage requirement through auto summarization or something?
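      The arithmetic can be double-checked with a quick script (a sketch using the numbers from the post; the straight-line extrapolation assumes all 200 interfaces carry traffic comparable to the current 5):

```python
# Back-of-envelope check of the storage estimate, using the figures
# reported in the post.
db_mb = 4897.375      # current NetFlow DB size in MB
days = 23             # days of collection so far
interfaces = 5        # interfaces currently monitored

# MB stored per interface per day (~42.6 MB)
per_if_per_day = db_mb / (days * interfaces)

# Daily growth for 200 interfaces, assuming linear scaling (~8.3 GB/day)
daily_200_mb = per_if_per_day * 200

# Total for 90 days of retention, in GB (~749 GB)
total_90d_gb = daily_200_mb * 90 / 1024

print(round(per_if_per_day, 1), round(total_90d_gb))
```

      The linear extrapolation lands at roughly 749GB, the same ballpark as the 747GB in the post, so the math itself checks out.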

        • Re: Netflow Storage Calculation

          It is likely that adding interfaces with comparable bandwidth will make the DB grow linearly; however, this is not true for extending the data retention – there the growth in DB space is closer to logarithmic. That is because of so-called collapsing. With the default settings, NTA collapsing works like this:

          - data are stored at one-minute granularity for one hour;
          - older data are collapsed to 15-minute granularity, which is kept for one day;
          - older data are collapsed to one-hour granularity and kept for 3 days;
          - older data are collapsed to one-day granularity and kept for 30 days;
          - data older than that are dropped completely.

          Uncompressed data are stored in NetFlowDetail… tables
          15-minute granularity data are in NetFlowSummary1 table
          1-hour granularity data are in NetFlowSummary2 table
          1-day granularity data are in NetFlowSummary3 table
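          To see why extending retention grows the DB sub-linearly, you can count the records kept per interface in each tier under the default collapse schedule (a sketch; the detail tier actually stores one record per flow, so treating it as one record per minute is a simplification for illustration):

```python
# Records kept per interface per tier under the default NTA collapse
# schedule. Each tier maps to (granularity_minutes, retention_minutes).
tiers = {
    "NetFlowDetail (1-min)":    (1,       60),            # 1 hour
    "NetFlowSummary1 (15-min)": (15,      24 * 60),       # 1 day
    "NetFlowSummary2 (1-hr)":   (60,      3 * 24 * 60),   # 3 days
    "NetFlowSummary3 (1-day)":  (24 * 60, 30 * 24 * 60),  # 30 days
}

for name, (gran, keep) in tiers.items():
    print(f"{name}: {keep // gran} records")

# Extending retention from 30 to 90 days only grows the last tier:
# its 30 daily records become 90, while the other tiers are unchanged.
```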

          In your case, if you kept 5 interfaces and extended the data retention to 90 days, then all tables except NetFlowSummary3 would stay approximately the same size, so you only need to recalculate the size of NetFlowSummary3 (keeping one-day records for 90 days instead of 30 would roughly triple it). Finally, you would multiply the result by 40, since you are going to have 40 times more interfaces.
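          As a sketch of that recalculation (the per-table sizes below are made-up placeholders, not measured values; you would read the real ones from your own DB, for example with SQL Server's sp_spaceused):

```python
# Hypothetical per-table sizes in MB for 5 interfaces at the default
# 30-day retention -- placeholder numbers for illustration only.
sizes_mb = {
    "NetFlowDetail":   4000.0,
    "NetFlowSummary1":  500.0,
    "NetFlowSummary2":  300.0,
    "NetFlowSummary3":   97.375,
}

# Extending retention 30 -> 90 days only grows the 1-day table (~3x);
# the other tables keep the same amount of history.
extended = dict(sizes_mb)
extended["NetFlowSummary3"] *= 90 / 30

# Scaling 5 -> 200 interfaces multiplies everything by 40.
total_gb = sum(extended.values()) * 40 / 1024
print(round(total_gb))
```

          With these placeholder numbers the estimate comes out far below the naive 747GB figure, because only the smallest table grows with the longer retention.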

          Would this result be more acceptable for you? If not, there is a possibility to adjust the collapse times (by editing the CollapseTriggers records in the NetFlowSettings DB table), but such a change should rather be performed with the assistance of SW support and would result in less granular data.