We are new to running NTA here so we are just trying to get up to speed. We set our system to keep 120 minutes of uncompressed data and 185 days of compressed data.
Right now we are using it in Category 1 mode. We just implemented some new circuits and are trying to see what chaos exists.
The plan is to begin using it for Category 2 more and more, with a little Category 3. We are undergoing a major application development push. During the past year we have gone from 10 application development contractors to around 40, while our internal staff remains the same at about 20. They are trying to push out as much new code as possible before the administration change in January (I work for a state government agency).
We are sometimes forced into Category 3, since our level 1 field staff are not the quickest to report problems to us (level 2 and myself at level 3). Numerous times, problems have gone on for weeks before we even hear about them. We are struggling to get our level 1 techs to be more proactive with problem reporting. Because of this, I have given them access to Orion and NTA to see data pertaining to their sites. That way they can check on this themselves and maybe notify us a bit sooner.
As for Category 4, well, we have used some reports and plan to use many more to show management that the application development team needs additional training and tools to test their new systems over a WAN link. We are already trying to get them to rewrite an application, since it is a WAN killer that downloads all financial transactions from the mainframe and then filters out most of the data locally. For some reason it works great in their cube, which has gigabit access to the mainframe, while the end users are on the far side of a T-1 link.
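The anti-pattern described above, pulling the entire dataset over the WAN and then filtering it on the client, can be contrasted with filtering at the source. This is a minimal sketch with hypothetical record names (not the agency's actual application); it just illustrates why the number of records crossing the link matters:

```python
# Hypothetical illustration of the "WAN killer" pattern vs. server-side filtering.
# In the real app, every record below would cross a T-1 link; here we simulate
# the dataset in memory and count what each approach would have to transfer.

transactions = [{"id": i, "office": i % 50} for i in range(100_000)]

def fetch_all_then_filter(office):
    """Client-side filtering: ALL records are 'downloaded', then discarded locally."""
    downloaded = list(transactions)            # 100,000 records over the WAN
    return [t for t in downloaded if t["office"] == office]

def fetch_filtered(office):
    """Server-side filtering: only matching records ever leave the mainframe."""
    return [t for t in transactions if t["office"] == office]  # 2,000 records

# Both return the same answer; only the transfer size differs.
assert fetch_all_then_filter(7) == fetch_filtered(7)
print(len(transactions), len(fetch_filtered(7)))  # 100000 2000
```

The point for the developers is that the result set is identical either way; the only difference is whether the filtering happens before or after the data crosses the slow link.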
We also plan on using it to show that we cannot sustain the tens of gigabytes of transfers that people seem to want to do. It is not uncommon for people to copy 50GB across a 3Mb link because they want the GIS (maps) data locally. We plan on using the data from NTA and Orion to justify our attempts to change the business process.
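A quick back-of-the-envelope calculation shows why those copies are a problem. Assuming the "3Mb link" means roughly 3 Mbit/s and ignoring protocol overhead:

```python
# Rough transfer time for 50 GB over a ~3 Mbit/s link
# (assumption: "3Mb link" = 3 Mbit/s; overhead and contention ignored).
size_bits = 50 * 8 * 10**9        # 50 GB expressed in bits
link_bps = 3 * 10**6              # 3 Mbit/s
hours = size_bits / link_bps / 3600
print(round(hours, 1))            # -> 37.0, i.e. ~37 hours of a saturated link
```

In other words, a single GIS copy can pin the site's entire circuit for a day and a half, which is exactly the kind of number that gets management's attention.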
We are currently in the process of turning up NetFlow and NDE in our environment, so we're just beginning to use the tool on a daily basis. Currently, we're keeping 120 minutes of uncompressed data and 14 days of compressed data. After we determine what NTA does to the size of our database, I expect to increase the amount of compressed data retained to at least 30 to 45 days.
As I am guessing is the case with most new implementations, we're pretty much only using the tool as a real time system. In the past we haven't had a good way to view traffic to and from the remote sites, so we're starting to get a handle on bandwidth hogs on our network. We've already used the tool to diagnose a problem with how clients receive antivirus updates.
As we bring more interfaces into NTA, I'm sure we'll begin to review data from the past week or two as needed, to follow up on issues discovered while using the tool as a real time system.
Aside from the real time monitoring, I think we'll gain a lot of value from looking at historical reports spanning the previous month to aid in capacity planning. We're able to see which sites are consistently using a majority of their bandwidth, but with NTA we'll be able to determine if the traffic consuming the bandwidth is valid traffic or not.
When we have NetFlow fully implemented, I expect our time will mostly be split between using the system as a real time tool and reviewing the previous month's historical data.
Netflow user from V1 days here. We use NTA almost exclusively for #1…real time conversation monitoring. We never look back more than two weeks.
#2. I mostly use NTA to view recent traffic stats.