Orion - SQL 2005 Server w/ SAN storage

Has anyone had any issues running Orion against a separate SQL Server 2005 Enterprise server backed by SAN storage?


I'm just getting started with Orion v9 SP2, and now that the majority of my network has been discovered, I seem to be running into major performance issues.  My polling engine will not stay running.  Also, when I launch the Manager app from the server, it takes forever (if it works at all) to open the Orion database.  If it does open, it usually locks up during the creating-baseline phase.  My DB is almost 5 GB.


My average disk queue length gets pretty high - sometimes 45-60, which according to support is way too high.  The support engineer also said using SAN storage wasn't recommended because of speed issues.
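For what it's worth, here is the per-file latency query I have been using to corroborate that from the SQL side. A sketch only, assuming the default Orion database name of NetPerfMon (adjust for your install):

    -- Average read/write latency per file of the Orion database (SQL 2005).
    SELECT mf.physical_name,
           1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(DB_ID('NetPerfMon'), NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id;
    -- Sustained double-digit averages here point at the storage path.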


Anyone have any thoughts and/or similar experience?  I've run through the Config Wizard multiple times - this gets the polling engine to start for a short time, but it shuts down again shortly after.

  • Have you examined the polling and statistics collection completion percentages? In System Manager, go to the File menu and select Polling Status. Look at the ICMP and SNMP polling indices. You will likely find the values very low. Also, go to the Polling Engine Tuning app and see what the recommended polls per second are versus what you are actually running.


    How many elements are you monitoring? If the indication is that you need to drastically increase the polls-per-second setting in order to monitor your network, but the CPU load and memory usage stats on your Orion polling engine are already high, you will need to add an additional polling server and load balance between the two servers to get the job done.
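    If you want to sanity-check the element count straight from the database, a rough query against the standard NPM tables works. A sketch, assuming the default table names (Nodes, Interfaces, Volumes):

        -- Rough element count from the Orion database.
        SELECT (SELECT COUNT(*) FROM Nodes)      AS nodes,
               (SELECT COUNT(*) FROM Interfaces) AS interfaces,
               (SELECT COUNT(*) FROM Volumes)    AS volumes;
        -- Ballpark the required rate: assuming the default 120-second status
        -- poll, you need roughly (nodes + interfaces + volumes) / 120 polls
        -- per second just to keep up; compare that with your configured cap.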

  • Thanks for the response.


    My ICMP polling index is 30000 out of 30000 and my SNMP polling index is 2975 out of 30000.  Max polls per second is 30.


    This is also just after a reconfigure, and it is calculating the baseline.  I have 566 nodes and 15,000 interface elements.  Again, I just started running into issues after my initial discovery, so I haven't even had a chance to unmanage some of the unnecessary nodes/elements.

  • We are in the process of upgrading from Orion 8.5.1 to Orion 9 and currently have Orion on the same server as SQL. I asked senior SolarWinds support for guidance before we upgrade so as to not make any costly mistakes. I was going down the path of using our SQL Enterprise server attached to a SAN (configured to their RAID requirements) but was told that under no circumstances should you do this. They have reported significant performance issues and data loss when the SQL storage is SAN-based. This was confirmed on 9/10/08.


    So in other words - don't do it......

  • I'm running SQL 2000 on a separate server using SAN storage. It is running on Windows 2003 x64, connected via Fibre Channel to the SAN. It's running fine, and has been since December of last year. I also have NTA running on it. My database is around 26 GB.

  • I have been running Orion on one of our SQL clusters since the v7.x days.  If you are using a SQL cluster, the standard is pretty much that you are using a SAN.  I am running SQL 2005 Enterprise 64-bit as the SQL back end for Orion.  It is running on 300 GB/10K RPM disks in a RAID-5 storage group on a 2 Gb/s FC loop.  Looking back over data from this year, I never see Avg. Disk Queue Length getting over 0.6.  I have 1800 nodes I am monitoring, with over 6,000 elements.  My database is over 35 GB.


    In Q1 of next year we intend to set up a new SQL 2005 64-bit Enterprise server, and the new storage will be 15K RPM disks running in RAID 10.


    Has SolarWinds even documented that you should not use SAN storage for the Orion SQL back end?  I cannot find that statement anywhere in the SolarWinds documentation.  Actually, I am very surprised that anybody at SolarWinds would say not to use SAN storage, since SANs are very much the standard for storage on enterprise networks.


  • <<Has SolarWinds even documented that you should not use SAN storage for the Orion SQL back end?  I cannot find that statement anywhere in the SolarWinds documentation.  Actually, I am very surprised that anybody at SolarWinds would say not to use SAN storage, since SANs are very much the standard for storage on enterprise networks.>>
     

    This is from our NetFlow Traffic Analyzer Administrator Guide, page 6: www.solarwinds.com/.../NetFlowAdministratorGuide.pdf

      Warning: The only RAID configurations that should be used on an Orion NetFlow Traffic Analyzer installation are 0, 1, 0+1 or 1+0. Due to the high speed and large memory requirements of NetFlow data transactions, other RAID or SAN configurations should not be used as they may result in data loss and significantly decrease performance.



    I know we are discussing Orion specifically, but NetFlow can process massive amounts of data - we have seen this firsthand - and one of our support people created this tutorial to help explain why such issues occur: www.solarwinds.com/.../gaps.htm

    I agree that SANs are great for storing information. However, with an application such as Orion constantly adding information to the database, logging into the website means we have to read that same information at the same time as Orion is writing more, and for that mixed workload SANs have historically created a performance bottleneck.

  • <<I agree that SANs are great for storing information. However, with an application such as Orion constantly adding information to the database, logging into the website means we have to read that same information at the same time as Orion is writing more, and for that mixed workload SANs have historically created a performance bottleneck.>>


    I am also running NetFlow on Orion, and it goes back to a SQL cluster with SAN storage.  I could see that if you were running iSCSI with 7200 RPM SATA disks for your SAN storage, you could have a performance bottleneck.  Looking over the disk PerfMon data on our SQL cluster with the Orion database, I don't see any performance bottleneck.  In my experience, SAN storage solutions are implemented to eliminate disk bottlenecks in performance, not create them.  If a SAN is presenting a disk bottleneck, then something is not configured right on the SAN.


    Consider this from an enterprise level: if you are running Orion, you will want a SQL cluster for the back end.  If you are doing a SQL cluster at the enterprise level, you are looking at SAN storage, probably over Fibre Channel.  I will have to keep an eye on things, but I am not going to restructure my Orion SQL environment.


  • Here are some great tips that Josh, the "Head Geek," posted a few months ago:

    Quick Summary:

    Head Geek's Top 5 Tips for Improving SQL Performance

    #5 - Add more RAM. It doesn't really matter how much you have; adding more will almost always help. Be sure that your SQL instance and OS are capable of consuming the additional RAM, and if not, make it so.

    #4 -  Just say "no" to RAID 5. It's great for application servers but horrible for database servers where I/O performance is important.

    #3 - Place the data and log files (.mdf and .ldf) on separate logical drives and separate channels or controllers (a sketch of the move follows this list).

    #2 - Unless your SAN is optimized for high I/O vs. large I/O, stick with a locally attached disk array.

    #1 - Buy disk controllers with battery backed-up write-back cache. The more the better, but at least 256MB.
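    For #3, the move itself is simple in SQL 2005. A minimal sketch, assuming a database named NetPerfMon, a log file logical name of NetPerfMon_log (check yours with sp_helpfile), and a hypothetical L: drive for the logs:

        -- Repoint the log file, then move it while the database is offline.
        ALTER DATABASE NetPerfMon
            MODIFY FILE (NAME = NetPerfMon_log,
                         FILENAME = 'L:\SQLLogs\NetPerfMon_log.ldf');
        ALTER DATABASE NetPerfMon SET OFFLINE;
        -- Copy the existing .ldf to L:\SQLLogs\ in Windows, then:
        ALTER DATABASE NetPerfMon SET ONLINE;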


    --He also said this about SANs:

    The Fast and the Furious - Orion, SQL, and SANs

     

  • Thanks for the feedback, Sean.  I never noticed any disk queue length issues for Orion running it off a SQL cluster on a SAN.  That being said, I have been noticing some performance issues with Orion over the last several months.  Yesterday I moved the 50 GB+ Orion database off RAID 5 10K RPM SAN storage, where both the MDF and LDF files sat on the same drive (don't ask, I am not the DBA).  The database is now on 15K RPM RAID 10 storage, and the LDF file is now located on a separate disk.  So I was breaking several of the rules noted above; now I am just breaking one. :)  Of course, somebody would say that RAID 10 can handle high I/O.

  • I just resolved a major performance issue with SQL on a SAN.  I have almost 20,000 pollers across 3 polling engines, and my SAN queue length for the drive/LUN is 0 about 98% of the time (it does spike when dumps run, so this is normal and good).  I had HORRIBLE performance before as well.
    What I did to resolve it was go into the database server properties (the actual SQL Server, not the NPM database) and enable AWE (a check box) on the Memory tab.  I then set the minimum server memory to 8 GB (of 32 GB available).  Research it, but it will make ALL the difference.  You will have to change a local security setting (the "Lock pages in memory" right for the SQL service account), but Google it and you'll find instructions; it's very straightforward.  A sketch of the equivalent settings follows below.
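    A minimal sketch of the same change via T-SQL, assuming SQL Server 2005 32-bit; AWE also requires the "Lock pages in memory" right mentioned above and a service restart:

        -- Enable AWE and set a memory floor (values from the post above).
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'awe enabled', 1;
        RECONFIGURE;
        EXEC sp_configure 'min server memory', 8192;  -- MB (8 GB of the 32 GB)
        RECONFIGURE;
        -- Restart the SQL Server service for AWE to take effect.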

    Hope that helps.