
Orion NPM Architecture, Speed, and SQL

In my last post, "What NPM Tips and Tricks do You Have?", I asked about tips and tricks, expecting a mashup of different things from all over the NPM world, and to a certain extent that's what happened. Interestingly, however, a large part of the thread turned into a discussion of two things: maps and speed.

There were certainly a lot of good map tips, and you can find more at SolarWinds Labs. In fact, you can even find out how to make your boss happy with a Big Green Button.

The speed issue is particularly intriguing to me, since there are plenty of times when, let's be honest, NPM is a bit of a dog when it comes to responsiveness. The web interface is notoriously slow, and it gets even worse when you have a ton of custom widgets, do-dads, and whatchamacallits loading on a screen. Several people mentioned that a lot of speed can be gained by getting in at the database level and pre-packaging certain things.

ZachM wrote:

Stored procedures and custom views created in the DB save us countless man-hours, and, in my experience, working directly in the DB can really expand your knowledge of the architecture of NPM overall. I highly recommend that every SolarWinds engineer challenge themselves to learn more SQL. I am by no means a DBA, but I can pull every bit of data you can get from the website, and I can do it faster 90% of the time.
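To make that concrete, here's a minimal sketch of the kind of custom view Zach describes: pre-joining and pre-filtering node data so a report or dashboard reads one flat object instead of assembling it on every page load. The table, column names, and status value below are illustrative assumptions — verify them against your own NPM database schema before using anything like this.

```sql
-- Hypothetical example: a view that flattens node health data.
-- Table/column names and the status code are assumptions; check
-- your NPM schema (e.g., the Nodes table) before adapting this.
CREATE VIEW dbo.vw_NodeHealthSummary AS
SELECT n.NodeID,
       n.Caption,
       n.Status,
       n.ResponseTime,
       n.PercentLoss,
       n.CPULoad
FROM dbo.Nodes AS n
WHERE n.Status <> 9;   -- illustrative filter: exclude unmanaged nodes
```

A reporting tool or custom query can then `SELECT ... FROM dbo.vw_NodeHealthSummary` directly, which is also easier to maintain than repeating the filter logic in every report.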

NPM is an incredibly flexible and extensible product, especially in recent revisions, and offers a lot of opportunity for people willing to really dig in behind the scenes. As usual, I have more questions:

* What SQL version and architecture are you using (separate database, named instances, etc.)?

* What architecture have you found helps in the speed department?

As an example of what I'm interested in: we run Cisco UCS servers, with VMware as the hypervisor layer, backed by fully licensed NetApp FAS3240 arrays with Flash Cache, etc. We tier our storage manually and have full production SQL and Oracle instances virtualized. The storage is connected to the UCS with an aggregated 80 Gb/s, and the UCS to the core at 160 Gb/s.

  • I've recently spent a lot of time and effort on this, even consulting with Atlantic Digital, Inc. and SolarWinds directly.

    The outcome is that we are focusing on the SAN and the data stores presented to the DB server.

    I was able to get an initial, noticeable increase in web performance by moving to four SSDs (purchased at Fry's, so they are SATA SSDs) in a RAID 10 configuration for redundancy.

    At that point, I had the log (.ldf) file on a SAN drive and the database (.mdf) file on the local SSD array.

    I had Atlantic Digital out for training and consultation, and they had me move the log (.ldf) file onto the SSD drives. This alone further DOUBLED my performance.
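    For anyone wanting to try the same move, here's a rough sketch of relocating the log file in SQL Server. The database name, logical file name, and drive path below are assumptions — look yours up in sys.master_files first, and take a backup before touching anything.

    ```sql
    -- Find the actual logical name and current path first:
    --   SELECT name, physical_name FROM sys.master_files
    --   WHERE database_id = DB_ID('SolarWindsOrion');
    USE master;

    -- Point SQL Server at the new location (takes effect when the DB
    -- comes back online). Names/paths here are illustrative.
    ALTER DATABASE SolarWindsOrion
        MODIFY FILE (NAME = SolarWindsOrion_log,
                     FILENAME = 'S:\SQLLogs\SolarWindsOrion_log.ldf');

    ALTER DATABASE SolarWindsOrion SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- Copy the .ldf to the new path at the OS level, then:
    ALTER DATABASE SolarWindsOrion SET ONLINE;
    ```

    Stop the Orion services before taking the database offline, or the pollers will queue up errors while it's unavailable.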

    We are about to migrate from Big Brother to SAM, so we decided to go enterprise and get a dedicated SAN. We are getting a Dell PV with 10 SAS SSDs that will present three separate "drives" to the DB server.

    One for the database, one for the log and TempDB, and one for the NetFlow filegroup.
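    The NetFlow piece of that layout might look something like the sketch below: a dedicated filegroup with its data file on the drive carved out for flow data. Database name, file sizes, and paths are illustrative assumptions, not a recommendation.

    ```sql
    -- Hypothetical example: put NetFlow data on its own filegroup,
    -- backed by a file on the drive dedicated to flow storage (F:).
    ALTER DATABASE SolarWindsOrion ADD FILEGROUP NetFlow;

    ALTER DATABASE SolarWindsOrion
        ADD FILE (NAME = NetFlow1,
                  FILENAME = 'F:\SQLData\NetFlow1.ndf',
                  SIZE = 50GB,
                  FILEGROWTH = 5GB)
        TO FILEGROUP NetFlow;
    ```

    Separating the high-churn flow tables this way keeps their heavy write I/O from competing with the main database and log files for the same spindles (or SSD channels).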

    I believe SolarWinds is working on a document on this very subject.

  • Yeah, disk I/O on the database servers is always a big performance bottleneck (and opportunity). My Oracle DBA and I work together on a lot of these same issues (moving certain mounts to certain disk arrays, etc.) to squeeze as much performance as possible out of the databases. I haven't spent as much time as I'd like on the SQL Server side, but the benefits are the same, even if the execution differs a little.

    Good stuff, thanks!

