I have exactly the same opinion - for large Netflow collections, you need the ability to offload to a separate server.
I have also suggested this directly to the development team.
I would also like to be able to separate the Netflow collector function from the poller server as well.
I would like to resurrect this thread. Are there any plans to host the netflow database separately from the NPM database? We have just deployed NPM, and it is very obvious that netflow produces a tremendous amount of data, and we are only monitoring 14 interfaces for netflow. I can't imagine how larger companies are handling this quantity of data.
We have just released NTA 3.1. For performance reasons, and as a compromise to your request, we decided to put the NTA tables that contain significant amounts of data into separate filegroups. You can move these filegroups to separate hard drives.
Is that at all helpful to you?
Thanks for the feedback. Sounds like SW is aware of the issue and trying to come up with a solution.
We are a small shop with a tight budget. We bought Orion because it gave us the biggest "bang for the buck". But with netflow the gun has backfired on us, because we now have to install a separate database server. The most important aspect of Orion for us is alerting, and we want it to stay up if at all possible. The more we distribute the core NPM software, the more points of failure we introduce. If we lose a database server, the app is down and we don't get alerts. Additionally, if the database is getting hammered by netflow, we risk not getting alerts.
We feel that a mid-size company such as ours should have the option to run NPM with alerting enabled on a SQLExpress instance running locally on the application server. NPM itself, with polling properly tuned and historical data storage optimally configured, should work easily within the confines of a 4GB SQLExpress instance.
If we choose to run netflow (which we have), then we should be able to store netflow data on a SEPARATE database server to minimize impact on the core app, for better performance and for larger storage capacity.
With netflow, the cost of doing business just went way up, because it requires a separate "powerful" SQL server and introduces more points of failure.
There are many well-documented posts on this forum about netflow impacting the Orion database. Here is my own example of the impact of netflow on the core application:
We ran NPM without netflow, polling 370 metrics, for four days, and our database was only 39MB. We then turned on netflow for only 12 interfaces, and within 48 hours our database had grown to 523MB. We are running NPM 9.1 SP3 and netflow 3.0.
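Back-of-the-envelope arithmetic from the numbers above (assuming roughly linear growth, which real NetFlow traffic won't follow exactly) shows why this worries us relative to the 4GB SQLExpress limit:

```python
# Rough growth estimate from the figures quoted above. Linear growth is an
# assumption; actual NetFlow volume varies with traffic and retention settings.

baseline_mb = 39        # DB size after 4 days with netflow disabled
with_netflow_mb = 523   # DB size 48 hours after enabling netflow on 12 interfaces
hours = 48
interfaces = 12

growth_per_day = (with_netflow_mb - baseline_mb) / (hours / 24)  # MB/day overall
per_interface_per_day = growth_per_day / interfaces              # MB/day per interface

# Days until a 4 GB SQL Express database would fill at this rate
sql_express_cap_mb = 4 * 1024
days_to_cap = (sql_express_cap_mb - with_netflow_mb) / growth_per_day

print(f"{growth_per_day:.0f} MB/day total, {per_interface_per_day:.1f} MB/day per interface")
print(f"~{days_to_cap:.0f} days until the 4 GB SQL Express limit at this rate")
```

At roughly 242 MB/day, a 4GB instance would fill in about two weeks, which is why netflow data really needs to live somewhere else.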
Thanks for listening.
What size interfaces do you have flows enabled on?
2 - Gigabit
3 - 100 Mbit (FastEthernet)
2 - 3 Mbit
2 - 6 Mbit
2 - tunnels
11 interfaces total in netflow